The value of high level unit tests and mock objects [closed] - unit-testing

I am beginning to believe that unit testing high level, well-written code, which requires extensive use of mock objects, has little to no value. I am wondering if this assertion is correct, or am I missing something?
What do I mean by high level? These are the classes and functions near the top of the food chain. Their input and output tends to be user input, and user interface. Most of their work consists of taking user input and making a series of calls to lower-level entities. They often have little or no meaningful return values.
What do I mean by well-written? In this case, I am referring to code that is decoupled from its dependencies (using interfaces and dependency injection), and line by line is at a consistent level of abstraction. There's no tricky algorithms, and few conditionals.
I hate writing unit tests for this kind of code. The unit tests consist almost entirely of mock object setup. Line by line, the unit tests read almost like a mirror image of the implementation. In fact, I write the unit tests by looking at the implementation. "First I assert this mock method is called, then I assert this mock method is called...", etc. I should be testing the behavior of the method, not that it's calling the right sequence of methods. Another thing: I have found that these tests are extremely fragile to refactoring. If a test is so brittle it utterly shatters and must be rewritten when the code under test is refactored, then hasn't one of the major benefits of unit testing been lost?
I don't want this post to be flagged as argumentative, or not a question. So I'll state my question directly: What is the correct way to unit test the kind of code I have described, or is it understood that not everything needs a unit test?

In my experience, the lower level your code is (short of being trivial), the more value unit tests have relative to the effort required to write them. As you get higher up the food chain, tests become increasingly elaborate and more expensive.
Unit tests are critical because they tell you when you break something during refactoring.
Higher level tests have their own value, but then they are no longer called unit tests; they are called integration tests and acceptance tests. Integration tests are needed because they tell you how well the different software components work together.
Acceptance tests are what the customer signs off. Acceptance tests are typically written by other people (not the programmer) in order to provide a different perspective; programmers tend to write tests for what works, testers try to break it by testing what doesn't work.
Mocking is only useful for unit tests. For integration and acceptance tests, mocking is useless because it doesn't exercise the actual system components, such as the database and the communication infrastructure.

An aside
Just to touch on your bolded statement:
"I should be testing the behavior of
the method, not that it's calling the
right sequence of methods"
The behaviour of the object-under-test is the sequence of actions it takes. This is actually "behaviour" testing, whereas when you say "behaviour of the method", I think you mean stateful testing, as in, give it an input and verify the correct output.
I make this distinction because some BDD purists go so far as to argue that it is much more meaningful to test what your class should be calling on, rather than what the inputs and outputs are, because if you know fully how your system is behaving, then your inputs and outputs will be correct.
A response
That aside, I personally never write comprehensive tests for the UI layer. If you are using an MVVM, MVP or MVC pattern for your application, then at a "1-developer team" level, it's mind-numbing and counter-productive for me to do so. I can see the bugs in the UI, and yes, mocking behaviours at this level tends to be brittle. I'm much more concerned with making sure that my underlying domain and DAL layers are performing properly.
What is of value at the top level is an integration test. Got a web app? Instead of asserting that your controller methods are returning an ActionResult (test of little value), write an integration test that requests all the pages in your app and makes sure there are no 404's or 403's. Run it once on every deployment.
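As a rough illustration, here is a minimal sketch of such a smoke-style integration test, assuming JUnit 4 and plain HttpURLConnection; the page list and base URL are hypothetical and would normally come from your route table or a sitemap.

import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class PageSmokeTest {

    // Hypothetical page list; in a real project this might be generated
    // from the route table or a sitemap.
    private static final String[] PAGES = {
        "http://localhost:8080/",
        "http://localhost:8080/products",
        "http://localhost:8080/account"
    };

    @Test
    public void allPagesRespondWithoutClientErrors() throws Exception {
        for (String page : PAGES) {
            HttpURLConnection conn = (HttpURLConnection) new URL(page).openConnection();
            conn.setRequestMethod("GET");
            int status = conn.getResponseCode();
            conn.disconnect();
            assertTrue(page + " returned " + status, status != 404 && status != 403);
        }
    }
}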
Final answer
I always follow the 80/20 rule with unit testing. To get that last 20% coverage at the high level you are talking about is going to take 80% of your effort. For my personal and most of my work projects, this doesn't pay off.
In short, I agree. I would write integration tests, and ignore unit tests for the code you describe.

I think it is highly dependent on environment. If you are on relatively small team, and can maintain test integrity, then the more complex parts of your application should have unit tests. It is my experience that maintaining test integrity on large teams is quite difficult, as the tests are initially ok until they inevitably break...at which point they are either a) "fixed" in a way which completely negates their usefulness, or b) promptly commented out.
The main point of Mock testing seems to be so that managers can claim that the code-coverage metric is at Foo%... so everything must be working! The one exceptional case where they are possibly useful is when you need to test a class which is a huge pain to recreate authentically (testing an Action class in Struts, for example).
I am a big believer in writing raw tests. Real code, with real objects. The code inside of methods will change over time, but the purpose and therefore the overall behaviour usually does not.

If you are doing TDD you should not write tests after the implementation but rather the other way around. This way you'll also avoid the problem of making a test conform to the already-written code. You probably do have to test certain method calls within those units, but not their sequence (if it's not imperative to the domain problem - business process).
And sometimes it's perfectly feasible not to write a test for a certain method.

In general, I consider testing this type of method/command to be ripe for the integration testing level. Specifically, I "unit test" for smaller, low level commands that (generally) don't have side effects. If I really want to unit test something that doesn't fit that model, the first thing I do is see if I can refactor/redesign to make it fit.
At the higher, integration (and/or system) testing level, I get into the testing of things that have side effects. I try to mock as little as possible (possibly only external resources) at this point. An example would be mocking the database layer to:
Record how it was called to get data
Return canned data
Record how it was called to insert manipulated data (see the sketch below)
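A rough Mockito sketch of that idea, assuming JUnit 4; OrderDao and OrderImportService are hypothetical stand-ins for a real DAO and the code under test.

import static org.mockito.Mockito.*;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class OrderImportServiceTest {

    // Hypothetical collaborator and unit under test, for illustration only.
    interface OrderDao {
        List<String> findPendingOrders(String region);
        void insertProcessedOrder(String order);
    }

    static class OrderImportService {
        private final OrderDao dao;
        OrderImportService(OrderDao dao) { this.dao = dao; }
        void importRegion(String region) {
            for (String order : dao.findPendingOrders(region)) {
                dao.insertProcessedOrder(order.toUpperCase());
            }
        }
    }

    @Test
    public void processesAndStoresEachPendingOrder() {
        OrderDao dao = mock(OrderDao.class);
        // Return canned data instead of hitting a real database.
        when(dao.findPendingOrders("EU")).thenReturn(Arrays.asList("a1", "b2"));

        new OrderImportService(dao).importRegion("EU");

        // Record/verify how the layer was called to read and insert data.
        verify(dao).findPendingOrders("EU");
        verify(dao).insertProcessedOrder("A1");
        verify(dao).insertProcessedOrder("B2");
    }
}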

Related

Any good arguments for not writing unit tests? [closed]

I have read a lot of threads about the joys and awesome points of unit testing. Is there a good argument against unit testing?
In the places I have previously worked, unit testing is usually used as a reason to run with a smaller testing department; the logic is "we have UNIT TESTS!! Our code can't possibly fail!! Because we have unit tests, we don't need real testers!!"
Of course that logic is flawed. I have seen many cases where you cannot trust the tests. I have also seen many cases where the tests become out of date due to tight time schedules - when you have a week to do a big job, most developers would spend the week doing the real code and shipping the product, rather than refactoring the unit tests for that first week, then pleading for at least another week to do the real code, and then spending a final week bringing the unit tests up to date with what they actually wrote.
I have also seen cases where the business logic involved in the unit test was more monstrous and hard to understand than the logic buried in the application. When these tests fail, you have to spend twice as long trying to work out the problem - is the test flawed, or the real code?
Now the zealots are not going to like this bit: the place where I work has largely escaped using unit tests because the developers are of a high enough calibre that it is hard to justify the time and resource to write unit tests (not to mention we use REAL testers). For real. Writing unit tests would have only given us minimal value, and the return on investment is just not there. Sure it would give people a nice warm fuzzy feeling - "I can sleep at night because my code is PROTECTED by unit tests and consequently the universe is at a nice equilibrium", but the reality is we are in the business of writing software, not giving managers warm fuzzy feelings.
Sure, there absolutely are good reasons for having unit tests. The trouble with unit testing and TDD is:
Too many people bet the family farm on it.
Too many people use it as a religion rather than just a tool or another methodology.
Too many people have tried to make money out of it, which has skewed how it should be used.
In reality, it should be used as one of the tools or methodologies that you use on a day to day basis, and it should never become the single methodology.
It's important to understand that it's not free. Tests require effort to write - and, more importantly, maintain.
Project managers and development teams need to be aware of this.
Virtually none of my bugs would have been found by unit testing. My bugs are mostly integration or unexpected-use-case bugs; in order to have found them earlier, more extensive (and ideally automated) system tests would have been the best bet.
I'm waiting for more evidence-based and less religion-based arguments for unit testing, as dummymo said. And I don't mean some experiment in some academic setting; I mean an argument that for my development scenario and programming ability, cost-benefit would be positive.
So, to agree with other answers to the OP: because they cost time and cost-benefit is not shown.
You have a data access layer that isn't easily adapted for mocking.
The simple truth is, when you write some code, you have to make sure it works before you say it's done. Which means you exercise it - build some scaffolding to call the function, passing some arguments, checking to make sure it returns what you expect. Is it so much extra work, to keep the scaffolding around, so you can run the tests again?
Well yes, actually, it can be. More often than not the tests will fail, even when the code is right, because the data you were using is no longer consistent, etc.
But if you have a unit testing framework in place, the cost of keeping the test code around can be only marginally more work than throwing it away. And while yes, you'll find that many of your test cases will fail because of problems with the data you are using, instead of problems with the code, that will happen less as you learn how to structure your tests so as to minimize the problem.
True, passing your unit tests does not guarantee that your system works. But it does provide some assurance that certain subsystems are working, which isn't nothing. And the test cases provide useful examples of how the functions were meant to be called.
It's not a panacea, but it's a useful practice.
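For what it's worth, a minimal sketch of "keeping the scaffolding around" as a JUnit 4 test; PriceCalculator is a hypothetical class included only so the example compiles on its own.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Hypothetical class under test, included only to keep the sketch self-contained.
    static class PriceCalculator {
        double totalWithDiscount(double amount) {
            return amount >= 100.0 ? amount * 0.9 : amount;
        }
    }

    // The same throwaway scaffolding most of us write anyway - call the
    // function, pass some arguments, check the result - kept as a test
    // method instead of being deleted once the code "works".
    @Test
    public void appliesTenPercentDiscountAtOrAboveThreshold() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.totalWithDiscount(100.0), 0.001);
        assertEquals(50.0, calc.totalWithDiscount(50.0), 0.001);
    }
}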
Formal verification.
If you can formally prove the correctness of code, there is no reason to unit test it unless the test condition brings in new variables, in which case, you'd still only have a small amount of unit tests (or prove for the new variables).
Unit tests will tell you whether one specific class method sets a variable correctly (or some variation on that). That does not, in any way, shape, or form, indicate that your application will behave properly or that it will handle the circumstances it will need to handle.
Any problem you can think of to write a test for, you are going to handle in your code anyway, so that problem is never going to show up. You then have 300 tests passing, plus real-world scenarios you just didn't think to test for. The effort required to create and maintain the tests, then, isn't necessarily worth it.
It's the usual cost/benefit analysis.
Cost: You need to spend time developing and maintaining the tests, and put resources into actually running them.
Benefits are well known (mostly cheaper maintenance/refactoring with fewer bugs).
So, you balance one against the other in the context of the project.
If it's a throwaway quick hack you know will never be re-used, unit tests might not make sense. Although to be honest, if I had a dollar for every throwaway quick hack that I saw running years later or, worse, had to maintain/refactor years later, I'd probably be able to be one of the venture capitalists investing in SO :)
Non-deterministic outcomes.
In simple cases you can seed the random generator(s) (or mock them somehow) to get reproducible results but when the algorithm is complex this becomes impossible as code changes will alter the demand for random numbers and thus alter the results.
This would rarely be encountered in a business situation but it's quite possible in games.
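In the simple case, it might look something like the following sketch (JUnit 4; the LootRoller class is hypothetical); the key is that the randomness is injected, so the test can fix the seed.

import java.util.Random;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LootRollerTest {

    // Hypothetical game logic that consumes randomness through an
    // injected Random rather than creating its own.
    static class LootRoller {
        private final Random random;
        LootRoller(Random random) { this.random = random; }
        int rollDamage() { return 1 + random.nextInt(6); }
    }

    @Test
    public void seededGeneratorGivesReproducibleRolls() {
        // A fixed seed makes the sequence reproducible across runs - but only
        // as long as the code consumes random numbers in the same order,
        // which is exactly the fragility described above.
        LootRoller roller = new LootRoller(new Random(42L));
        assertEquals(1 + new Random(42L).nextInt(6), roller.rollDamage());
    }
}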
Testing is like insurance.
You don't put all your money in to it.
But you don't avoid your life insurance. (People from the US should still remember the Health Insurance Bill.)
Insurance is an ESSENTIAL evil.
BUT BUT BUT...
You don't get insured expecting a fatal accident just so you can recover all the money you put into your insurance plan.
In summary,
There is SOME reason to write Tests.
Unit tests are sometimes one of the many ways to go forward.
BUT there is NO REASON to focus entirely on writing (Unit) Tests.
It can discourage experimenting with several variations, especially in early stages of a project. (But it can also encourage experimenting in later stages!)
It can't replace system testing, because it doesn't cover the relationship between components. So if you have to split up the available testing time between system testing and unit testing, then too much unit testing can have a negative impact on the amount of system tests.
I want to add, that I usually encourage unit testing!
It is impossible to generalize where unit tests are going to provide cost-benefit and where they are not. I see a lot of people arguing strongly in favor of unit testing and blaming people who don't for not using TDD enough, while completely ignoring the fact that applications can differ as much as the real world does.
For instance, it is incredibly hard to get anything useful out of unit tests when you have a lot of integration points, either between systems, and/or between processes and threads of your own application.
If all you ever did were websites like Stackoverflow, where the problem domain is well understood and most solutions are fairly trivial, then yes, writing unit tests has a lot of benefits, but there are lots of applications out there that simply can't be unit tested properly, as they lack, well, "units".
Laziness; sometimes I'm lazy and just don't want to do it!
But seriously, unit testing is great, but if I'm just coding for my own enjoyment I generally don't do it, because the projects are short lived, I'm the only one working on it, and they're not that big.
There is never a reason to never write unit tests.
There are good reasons to not write specific unit tests. (Especially if you use code generation. Of course you could code generate the unit tests to make sure nobody has mucked with the generated code. But that is dependent upon trusting the team.)
Edit
Oh, and from what I understand, some things in functional programming either compile and thus work, or don't compile.
Would those things need unit tests?
I agree with the notion that there are no good arguments against unit testing in general. There are some specific situations, however, where unit testing may not be a viable option or is at least problematic and/or poses a difficult return-on-investment proposition for the level of effort involved to create and maintain tests.
Here are some examples:
Real-time-dependent behavior in response to external conditions. Some purists may argue that this is not unit testing but rather involves scenarios at an integration or system testing level. However, I've written code for simple, low-level functionality for quasi-embedded applications where it would be useful to at least partially test real-time response via a unit-testing framework for build/regression testing purposes.
Testing behavioral and/or policy-level functionality that requires a complex data description of the environmental state to which the tested code module is responding. This is related to an earlier poster's comment regarding the difficulty of doing unit testing involving a data access layer that isn't easily adapted for mocking. Although the behavior/policy being tested may be relatively simple, it needs to be tested across a complex state description. The value of doing unit testing here is in assuring that rare yet key conditions are handled correctly for a mission-critical application. One wants to mock the latter and create a simulated environment/state, but the cost of doing so may be high.
There are at least two alternatives to unit testing for the above scenarios:
For real-time or quasi-real-time functionality testing, extensive system testing can be done to try to compensate for the lack of good unit testing. For some applications this may be the only option, e.g., embedded systems involving hardware and/or physical systems.
Create a test harness or system-level simulator that facilitates extensive testing across a range of randomly simulated conditions. This can be useful for testing the policy/behavior scenarios described earlier involving a complex environmental state. Although significant work may be involved in creating the test harness or simulator, the return-on-investment may be a much better value than for isolated unit tests since a much broader range of conditions can be tested.
Since the test environment involves random rather than specific conditions, this approach may not offer quite the level of assurance desired for some mission-critical scenarios. Conducting extensive tests may help make up for this. Alternatively, creating a test harness or system simulator for random conditions may also help with reducing the overall cost of testing specific complex state scenarios since the development cost is now shared across a broader range of testing needs.
In the end, how to best approach testing any given application comes down to cost vs. value. Unit testing is one of the best options and should always be used where feasible, but it is not universally applicable to all scenarios. Like many things in software, sometimes it will just be a judgment call one has to make and then be prepared to make adjustments based on the outcome.
Unit testing is a trade-off. I see two problems:
It takes time. Not only to write the tests, but also (and this is the most annoying part) if you want to make an important change, you now have two places you need to modify. In the worst case it could possibly discourage you from re-architecting your codebase.
It only protects against problems that you think could arise, and you mostly cannot test against side-effects. This can lead to a false sense of security as mentioned before.
I agree unit testing is a valuable tool for increasing the reliability of enterprise software with a relatively stable codebase. But for personal projects or infant projects, I think a generous use of asserts in your code is a much better trade-off.
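To make the "generous use of asserts" idea concrete, here is a small Java sketch; Interpolator and lerp are hypothetical names, and the asserts only run when the JVM is started with -ea.

public class Interpolator {

    // Defensive asserts document and check assumptions without a separate
    // test suite; they are only evaluated when assertions are enabled (-ea).
    public static double lerp(double start, double end, double t) {
        assert t >= 0.0 && t <= 1.0 : "t out of range: " + t;
        double result = start + (end - start) * t;
        assert !Double.isNaN(result) : "result is NaN";
        return result;
    }

    public static void main(String[] args) {
        // Run with: java -ea Interpolator
        System.out.println(lerp(0.0, 10.0, 0.25)); // prints 2.5
    }
}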
I wouldn't say it's an argument against it, but for us we have a legacy application with a TON of code, written in COBOL. It is virtually impossible at this point to say we want to implement unit testing and do it with any degree of accuracy or within a reasonable time frame for the business, as pointed out by duffymo.
So I guess to add onto that, maybe one argument would be the inability (in some cases) of trying to implement unit tests after development has been completed (and maintained for years).
Instead of completely getting rid of them, we write unit tests only for core functionality such as payment authorization, user authentication, etc etc. It is very useful as there will always be some touch points that are very sensitive to code changes in a large code base and you would want some way to verify those touch points work without failing in QA.
For writing unit tests in general, learning curve is the biggest reason I know of to not bother. I have been trying to learn good unit testing for about 1.5 years now, and I feel like I'm just getting good at it (writing audit log spies, mocking, testing 1 constraint per test, etc.), although I feel it has sped up development for me for about 1 year of that time. So call it 6 months of struggling through it before it really started paying off. (I was still doing "real" work during that time, of course.)
Most of the pain experienced during that time was due to failure to follow the guidelines of good unit testing.
For a variety of specific cases, ability to unit test may be blocked; others have commented on some of those.
In Test Driven Development, the unit tests are, more importantly, a way to design your code to be testable to begin with. As it turns out, your code tends to be more modular, and writing the tests helps to flesh out APIs and so forth.
Too often though, you find yourself developing the code then writing the tests, commenting out the code you just wrote to ensure that the tests fail, then selectively removing the comment tokens to make the tests pass. Why? Well because it's much harder to write tests than it is to write code in some cases. It's also often much more difficult to write code that can be tested in a completely automated way. Think about user interfaces, code that generates images or pdfs, database transactions.
So unit tests do help a lot, but expect to write about 3 times as much code for the tests as you will write for the actual code. Plus all of this will need to be maintained. Significant changes to the application will invalidate huge chunks of tests - a good measure of the impact to the system, but still... Are you prepared for that? Are you on a small team where you are doing the job of 5 developers? If so, then automated TDD development just won't fly - you won't have time to get stuff done fast enough. So you end up relying on your own manual testing, QA testing stuff as well, and just living with bugs slipping through and then fixing things up ASAP. It's unfortunate, high-pressure and exasperating, but it's reality in small companies that don't hire enough developers for the work that needs to be done.
No, really I don't. In my experience people who do are presenting a straw man argument or just don't know how to unit test things that are not obvious how to unit test.
@Khorkak - If you change a feature in your production code, only a handful of your unit tests should be affected. If that's not the case, it means that you are not decoupling your production code and testing in isolation, but instead exercising large chunks of your production code in integration. That's just poor unit testing skill, and yes, it's a BIG problem. Not only because your unit test code base will become hard to maintain, but also because your production code base will suck and have the same problem.
Unit tests make no sense for Disposable Code: If the code is a Q&D proof of concept, something created during a spike to investigate various approaches, or anything else you are SURE will almost always be thrown away, then writing unit tests won't bring much return on the investment. In fact they could hurt you, as the time spent there is time spent not trying a different approach, etc. (opportunity cost).
The key is being sure that's the case, or having enough of an understanding with teammates etc. that if someone says 'that's great, use that one', you then invest the time to bring the code up to the standards for NON-disposable code.
For those who asked for better proof I'd refer you to this page. http://biblio.gdinwiddie.com/biblio/StudiesOfTestDrivenDevelopment
Note that many of these are studies by academic types, but they were done against groups doing real production software work, so personally it seems like they have a pretty good amount of validity.

What are the pros and cons of automated Unit Tests vs automated Integration tests?

Recently we have been adding automated tests to our existing java applications.
What we have
The majority of these tests are integration tests, which may cover a stack of calls like:
HTTP post into a servlet
The servlet validates the request and calls the business layer
The business layer does a bunch of stuff via hibernate etc and updates some database tables
The servlet generates some XML, runs this through XSLT to produce response HTML.
We then verify that the servlet responded with the correct XML and that the correct rows exist in the database (our development Oracle instance). These rows are then deleted.
We also have a few smaller unit tests which check single method calls.
These tests are all run as part of our nightly (or adhoc) builds.
The Question
This seems good because we are checking the boundaries of our system: servlet request/response on one end and the database on the other. If these work, then we are free to refactor or mess with anything in between and have some confidence that the servlet under test continues to work.
What problems are we likely to run into with this approach?
I can't see how adding a bunch more unit tests on individual classes would help. Wouldn't that make it harder to refactor as it's much more likely we will need to throw away and re-write tests?
Unit tests localize failures more tightly. Integration-level tests correspond more closely to user requirements and so are a better predictor of delivery success. Neither of them is much good unless built and maintained, but both of them are very valuable if properly used.
The thing with unit tests is that no integration-level test can exercise all the code as much as a good set of unit tests can. Yes, that can mean that you have to refactor the tests somewhat, but in general your tests shouldn't depend on the internals so much. So, let's say for example that you have a single function to get a power of two. You describe it (as a formal methods guy, I'd claim you specify it):
long pow2(int p); // returns 2^p for 0 <= p <= 30
Your test and your spec look essentially the same (this is sort of pseudo-xUnit for illustration):
assertEqual(1073741824, pow2(30));
assertEqual(1, pow2(0));
assertException(domainError, pow2(-1));
assertException(domainError, pow2(31));
Now your implementation can be a for loop with a multiply, and you can come along later and change that to a shift.
If you change the implementation so that, say, it's returning 16 bits (remember that sizeof(long) is only guaranteed to be no less than sizeof(short)), then these tests will fail quickly. An integration-level test should probably fail, but not certainly, and it's just as likely as not to fail somewhere far downstream of the computation of pow2(28).
The point is that they really test for different situations. If you could build sufficiently detailed and extensive integration tests, you might be able to get the same level of coverage and degree of fine-grained testing, but it's probably hard to do at best, and the exponential state-space explosion will defeat you. By partitioning the state space using unit tests, the number of tests you need grows much less than exponentially.
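To make that concrete, here is roughly the same spec as a JUnit 4 test, with a straightforward loop implementation that could later be swapped for a shift without touching the tests; the assumption here is that the domain error is signalled with an IllegalArgumentException.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class Pow2Test {

    // Straightforward loop implementation; it could later be replaced by
    // "return 1L << p;" and the tests below would still pass.
    static long pow2(int p) {
        if (p < 0 || p > 30) {
            throw new IllegalArgumentException("p must be in [0, 30]: " + p);
        }
        long result = 1L;
        for (int i = 0; i < p; i++) {
            result *= 2;
        }
        return result;
    }

    @Test
    public void matchesTheSpecAtTheBoundaries() {
        assertEquals(1L, pow2(0));
        assertEquals(1073741824L, pow2(30));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeExponents() {
        pow2(-1);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsExponentsAboveThirty() {
        pow2(31);
    }
}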
You are asking pros and cons of two different things (what are the pros and cons of riding a horse vs riding a motorcycle?)
Of course both are "automated tests" (~riding) but that doesn't mean that they are alternative (you don't ride a horse for hundreds of miles, and you don't ride a motorcycle in closed-to-vehicle muddy places)
Unit Tests test the smallest unit of the code, usually a method. Each unit test is closely tied to the method it is testing, and if it's well written it's tied (almost) only with that.
They are great to guide the design of new code and the refactoring of existing code. They are great to spot problems long before the system is ready for integration tests. Note that I wrote guide: all of Test Driven Development is about this word.
It does not make any sense to have manual Unit Tests.
What about refactoring, which seems to be your main concern? If you are refactoring just the implementation (content) of a method, but not its existence or "external behavior", the Unit Test is still valid and incredibly useful (you cannot imagine how useful until you try).
If you are refactoring more aggressively, changing methods existence or behavior, then yes, you need to write a new Unit Test for each new method, and possibly throw away the old one. But writing the Unit Test, especially if you write it before the code itself, will help to clarify the design (i.e. what the method should do, and what it shouldn't) without being confused by the implementation details (i.e. how the method should do the thing that it needs to do).
Automated Integration Tests test the biggest unit of the code, usually the entire application.
They are great to test use cases which you don't want to test by hand. But you can also have manual Integration Tests, and they are as effective (only less convenient).
Starting a new project today, it does not make any sense not to have Unit Tests, but I'd say that for an existing project like yours it does not make too much sense to write them for everything you already have that is working.
In your case, I'd rather use a "middle ground" approach writing:
smaller Integration Tests which only test the sections you are going to refactor. If you are refactoring the whole thing, then you can use your current Integration Tests, but if you are refactoring only -say- the XML generation, it does not make any sense to require the presence of the database, so I'd write a simple and small XML Integration Test.
a bunch of Unit Tests for the new code you are going to write. As I already wrote above, Unit Tests will be ready as soon as you "mess with anything in between", making sure that your "mess" is going somewhere.
In fact your Integration Test will only make sure that your "mess" is not working (because at the beginning it will not work, right?) but it will not give you any clue on
why it is not working
if your debugging of the "mess" is really fixing something
if your debugging of the "mess" is breaking something else
Integration Tests will only give the confirmation at the end if the whole change was successful (and the answer will be "no" for a long time). The Integration Tests will not give you any help during the refactoring itself, which will make it harder and possibly frustrating. You need Unit Tests for that.
I agree with Charlie about Integration-level tests corresponding more to user actions and the correctness of the system as a whole. I do think there is a lot more value to Unit Tests than just localizing failures more tightly, though. Unit tests provide two main values over integration tests:
1) Writing unit tests is as much an act of design as of testing. If you practice Test Driven Development/Behavior Driven Development, the act of writing the unit tests helps you design exactly what your code should do. It helps you write higher quality code (since being loosely coupled helps with testing) and it helps you write just enough code to make your tests pass (since your tests are in effect your specification).
2) The second value of unit tests is that if they are properly written they are very, very fast. If I make a change to a class in your project, can I run all the corresponding tests to see if I broke anything? How do I know which tests to run? And how long will they take? I can guarantee it will be longer than well-written unit tests would. You should be able to run all of your unit tests in a couple of minutes at the most.
Just a few examples from personal experience:
Unit Tests:
(+) Keeps testing close to the relevant code
(+) Relatively easy to test all code paths
(+) Easy to see if someone inadvertently changes the behavior of a method
(-) Much harder to write for UI components than for non-GUI
Integration Tests:
(+) It's nice to have nuts and bolts in a project, but integration testing makes sure they fit each other
(-) Harder to localize source of errors
(-) Harder to test all (or even all critical) code paths
Ideally both are necessary.
Examples:
Unit test: Make sure that the input index is >= 0 and < the length of the array. What happens when it's outside those bounds? Should the method throw an exception or return null?
Integration test: What does the user see when a negative inventory value is input?
The second affects both the UI and the back end. Both sides could work perfectly, and you could still get the wrong answer, because the error condition between the two isn't well-defined.
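A rough sketch of the unit-test half of that example (JUnit 4); itemAt is a hypothetical method that picks the "return null" answer to the question above, and the tests pin that decision down.

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

public class InventoryLookupTest {

    // Hypothetical lookup that returns null outside the bounds rather than throwing.
    static String itemAt(String[] items, int index) {
        if (index < 0 || index >= items.length) {
            return null;
        }
        return items[index];
    }

    private final String[] items = {"bolt", "nut", "washer"};

    @Test
    public void returnsItemsForValidIndexes() {
        assertEquals("bolt", itemAt(items, 0));
        assertEquals("washer", itemAt(items, 2));
    }

    @Test
    public void returnsNullJustOutsideEitherBound() {
        assertNull(itemAt(items, -1));
        assertNull(itemAt(items, 3));
    }
}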
The best part about Unit testing we've found is that it makes devs go from code->test->think to think->test->code. If a dev has to write the test first, [s]he tends to think more about what could go wrong up front.
To answer your last question, since unit tests live so close to the code and force the dev to think more up front, in practice we've found that we don't tend to refactor the code as much, so less code gets moved around - so tossing and writing new tests constantly doesn't appear to be an issue.
The question has a philosophical part for sure, but also points to pragmatic considerations.
Test driven design used as the means to become a better developer has its merits, but it is not required for that. Many a good programmer exists who never wrote a unit test. The best reason for unit tests is the power they give you when refactoring, especially when many people are changing the source at the same time. Spotting bugs on checkin is also a huge time-saver for a project (consider moving to a CI model and building on checkin instead of nightly). So if you write a unit test, either before or after you've written the code it tests, you are sure at that moment about the new code you've written. It is what can happen to that code later that the unit test ensures against - and that can be significant. Unit tests can stop bugs before they get to QA, thereby speeding up your projects.
Integration tests stress the interfaces between elements in your stack, if done correctly. In my experience, integration is the most unpredictable part of a project. Getting individual pieces to work tends not to be that hard, but putting everything together can be very difficult because of the types of bugs that can emerge at this step. In many cases, projects are late because of what happens in integration. Some of the errors encountered in this step are found in interfaces that have been broken by some change made on one side that was not communicated to the other side. Another source of integration errors is configurations discovered in dev but forgotten by the time the app goes to QA. Integration tests can help reduce both types dramatically.
The importance of each test type can be debated, but what will be of most importance to you is the application of either type to your particular situation. Is the app in question being developed by a small group of people or many different groups? Do you have one repository for everything, or many repos, each for a particular component of the app? If you have the latter, then you will have challenges with the compatibility of different versions of different components.
Each test type is designed to expose the problems of different levels of integration in the development phase to save time. Unit tests drive the integration of the output of many developers operating on one repository. Integration tests (poorly named) drive the integration of components in the stack - components often written by separate teams. The class of problems exposed by integration tests is typically more time-consuming to fix.
So pragmatically, it really boils down to where you most need speed in your own org/process.
The thing that distinguishes Unit tests and Integration tests is the number of parts required for the test to run.
Unit tests (theoretically) require very few (or no) other parts to run.
Integration tests (theoretically) require lots (or all) other parts to run.
Integration tests test behaviour AND the infrastructure. Unit tests generally only test behaviour.
So, unit tests are good for testing some stuff, integration tests for other stuff.
So, why unit test?
For instance, it is very hard to test boundary conditions when integration testing. Example: a back end function expects a positive integer or 0; the front end does not allow entry of a negative integer; how do you ensure that the back end function behaves correctly when you pass a negative integer to it? Maybe the correct behaviour is to throw an exception. This is very hard to do with an integration test.
So, for this, you need a unit test (of the function).
Also, unit tests help eliminate problems found during integration tests. In your example above, there are a lot of points of failure for a single HTTP call:
the call from the HTTP client
the servlet validation
the call from the servlet to the business layer
the business layer validation
the database read (hibernate)
the data transformation by the business layer
the database write (hibernate)
the data transformation -> XML
the XSLT transformation -> HTML
the transmission of the HTML -> client
For your integration tests to work, you need ALL of these processes to work correctly. For a Unit test of the servlet validation, you need only one: the servlet validation (which can be independent of everything else). A problem in one layer becomes easier to track down.
You need both Unit tests AND integration tests.
Unit tests execute methods in a class to verify proper input/output without testing the class in the larger context of your application. You might use mocks to simulate dependent classes -- you're doing black box testing of the class as a stand alone entity. Unit tests should be runnable from a developer workstation without any external service or software requirements.
Integration tests will include other components of your application and third party software (your Oracle dev database, for example, or Selenium tests for a webapp). These tests might still be very fast and run as part of a continuous build, but because they inject additional dependencies they also risk injecting new bugs that cause problems for your code but are not caused by your code. Preferably, integration tests are also where you inject real/recorded data and assert that the application stack as a whole is behaving as expected given those inputs.
The question comes down to what kind of bugs you're looking to find and how quickly you hope to find them. Unit tests help to reduce the number of "simple" mistakes while integration tests help you ferret out architectural and integration issues, hopefully simulating the effects of Murphy's Law on your application as a whole.
Joel Spolsky has written a very interesting article about unit testing (it was a dialog between Joel and some other guy).
The main idea was that unit tests are a very good thing, but only if you use them in "limited" quantity. Joel doesn't recommend trying to achieve a state where 100% of your code is covered by test cases.
The problem with unit tests is that when you want to change the architecture of your application you'll have to change all the corresponding unit tests. And that will take a lot of time (maybe even more time than the refactoring itself). And after all that work, only a few tests will fail.
So, write tests only for code that really can cause trouble.
How I use unit tests: I don't like TDD, so I first write the code, then I test it (using the console or a browser) just to be sure this code does the necessary work. And only after that do I add "tricky" tests - 50% of them fail after that first round of testing.
It works and it doesn't take much time.
We have 4 different types of tests in our project:
Unit tests with mocking where necessary
DB tests that act similar to unit tests but touch db & clean up afterwards
Our logic is exposed through REST, so we have tests that do HTTP
Webapp tests using WatiN that actually use IE instance and go over major functionality
I like unit tests. They run really fast (100-1000x faster than #4 tests). They are type safe, so refactoring is quite easy (with good IDE).
The main problem is how much work is required to do them properly. You have to mock everything: DB access, network access, other components. You have to decorate unmockable classes, getting a zillion mostly useless classes. You have to use DI so that your components are not tightly coupled and therefore untestable (note that using DI is not actually a downside :)
I like tests #2. They do use the database and will report database errors, constraint violations and invalid columns. I think we get valuable testing using this.
#3 and especially #4 are more problematic. They require some subset of the production environment on the build server. You have to build, deploy and have the app running. You have to have a clean DB every time. But in the end, it pays off. WatiN tests require constant work, but you also get constant testing. We run tests on every commit and it is very easy to see when we break something.
So, back to your question. Unit tests are fast (which is very important; build time should be less than, say, 10 minutes) and they are easy to refactor. Much easier than rewriting the whole WatiN thing if your design changes. If you use a nice editor with a good find-usages command (e.g. IDEA or VS.NET + Resharper), you can always find where your code is being tested.
With REST/HTTP tests, you get good validation that your system actually works. But the tests are slow to run, so it is hard to have complete validation at this level. I assume your methods accept multiple parameters or possibly XML input. To check each node in the XML or each parameter, it would take tens or hundreds of calls. You can do that with unit tests, but you cannot do that with REST calls, when each can take a big fraction of a second.
Our unit tests check special boundary conditions far more often than the #3 tests do. They (#3) check that the main functionality is working, and that's it. This seems to work pretty well for us.
As many have mentioned, integration tests will tell you whether your system works, and unit tests will tell you where it doesn't. Strictly from a testing perspective, these two kinds of tests complement each other.
"I can't see how adding a bunch more unit tests on individual classes would help. Wouldn't that make it harder to refactor as it's much more likely we will need to throw away and re-write tests?"
No. It will make refactoring easier and better, and make it clearer to see what refactorings are appropriate and relevant. This is why we say that TDD is about design, not about testing. It's quite common for me to write a test for one method and in figuring out how to express what that method's result should be to come up with a very simple implementation in terms of some other method of the class under test. That implementation frequently finds its way into the class under test. Simpler, more solid implementations, cleaner boundaries, smaller methods: TDD - unit tests, specifically - lead you in this direction, and integration tests do not. They're both important, both useful, but they serve different purposes.
Yes, you may find yourself modifying and deleting unit tests on occasion to accommodate refactorings; that's fine, but it's not hard. And having those unit tests - and going through the experience of writing them - gives you better insight into your code, and better design.
Although the setup you described sounds good, unit testing also offers something important. Unit testing offers fine levels of granularity. With loose coupling and dependency injection, you can pretty much test every important case. You can be sure that the units are robust; you can scrutinise individual methods with scores of inputs or interesting things that don't necessarily occur during your integration tests.
E.g. if you want to deterministically see how a class will handle some sort of failure that would require a tricky setup (e.g. network exception when retrieving something from a server) you can easily write your own test double network connection class, inject it and tell it to throw an exception whenever you feel like it. You can then make sure that the class under test gracefully handles the exception and carries on in a valid state.
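A minimal sketch of that kind of hand-written test double, assuming JUnit 4; Connection, CatalogClient and the fallback-to-cache behaviour are all hypothetical.

import java.io.IOException;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CatalogClientTest {

    // Hypothetical seam: the class under test talks to the network only
    // through this small interface, so a test double can be injected.
    interface Connection {
        String fetch(String path) throws IOException;
    }

    static class CatalogClient {
        private final Connection connection;
        CatalogClient(Connection connection) { this.connection = connection; }

        // Falls back to a cached value when the network fails.
        String titleFor(String id, String cachedTitle) {
            try {
                return connection.fetch("/catalog/" + id);
            } catch (IOException e) {
                return cachedTitle;
            }
        }
    }

    @Test
    public void fallsBackToCacheWhenTheConnectionThrows() {
        Connection failing = path -> { throw new IOException("simulated outage"); };
        CatalogClient client = new CatalogClient(failing);
        assertEquals("cached title", client.titleFor("42", "cached title"));
    }
}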
You might be interested in this question and the related answers too. There you can find my addition to the answers that were already given here.

If a project has 100% unit test coverage, are integration tests still needed?

If a project has 100% unit test coverage, are integration tests still needed?
I have never worked on a project with 100% unit test coverage, but I'm wondering: if your project attains this (or is in the 90% range), was it your experience that you still needed integration tests? (Did you need fewer?)
I ask because integration tests seem to suck. They are often slow, fragile (break easily), opaque (when broken someone has to dive through all the layers to find out what is wrong) and are causing our project to slow way down... I'm beginning to think that having only unit tests (and perhaps a small handful of smoke tests) is the way to go.
In the long run, it seems like integration tests (in my experience) cost more than they save.
Thanks for your consideration.
Definitions
I think it's important to define your terms before having this discussion.
Unit test tests a single unit in isolation. For me, that's a class. A unit test will create an object, invoke a method, and check a result. It answers the question "does my code do what I intended it to do?"
Integration test tests the combination of two components in the system. It is focused on the relationship between the components, not the components themselves. It answers the question "do these components work together as intended".
System test tests the whole software system. It answers the question "does this software work as intended?"
Acceptance test is an automated way for the customer to answer the question "is this software what I think I want?". It is a kind of system test.
Note that none of these tests answer questions like "is this software useful?" or "is this software easy to use?".
All automated tests are limited by the axiom "End-to-end is further than you think" - eventually a human has to sit down in front of a computer and look at your user interface.
Comparisons
Unit tests are faster and easier to write, faster to run, and easier to diagnose. They don't depend on "external" elements like a file system or a database, so they are much simpler/faster/reliable. Most unit tests continue to work as you refactor (and good unit tests are the only way to refactor safely). They absolutely require that your code be decoupled, which is hard, unless you write the test first. This combination of factors makes the Red/Green/Refactor sequence of TDD work so well.
System tests are hard to write, because they have to go through so much setup to get to a specific situation that you want to test. They are brittle, because any change in the software's behavior earlier in the sequence can affect the steps leading up to the situation you want to test, even if that behavior isn't relevant to the test. They are dramatically slower than unit tests for similar reasons. Failures can be very difficult to diagnose, both because it can take a long time to get to the point of failure, and because so much software is involved in the failure. In some software, system tests are very difficult to automate.
Integration tests sit in between: they are easier to write, run, and diagnose than system tests, but with broader coverage than unit tests.
Recommendation
Use a combination of testing strategies to balance the costs and values of each.
Yes.
Even if all "units" do what they are supposed to do, it is no guarantee that the complete system works as designed.
Yes. Besides, there are a few different types of code coverage.
from wiki:
Function coverage - Has each function in the program been executed?
Statement coverage - Has each line of the source code been executed?
Decision coverage (also known as Branch coverage) - Has each control structure (such as an if statement) evaluated both to true and false?
Condition coverage - Has each boolean sub-expression evaluated both to true and false (this does not necessarily imply decision coverage)?
Modified Condition/Decision Coverage (MC/DC) - Has every condition in a decision taken on all possible outcomes at least once? Has each condition been shown to affect that decision outcome independently?
Path coverage - Has every possible route through a given part of the code been executed?
Entry/exit coverage - Has every possible call and return of the function been executed?
Take path coverage, for example: just because every method has been called doesn't mean that errors won't occur when you call various methods in a given order.
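A toy Java illustration of that point: every method of the hypothetical Account class below is executed by the first test (so function coverage is 100%), yet the second test, which calls the same methods in a different order, deliberately fails against this implementation and exposes the bug that per-method coverage missed.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountCoverageExample {

    // Hypothetical class: withdraw has a bug because it ignores the frozen flag.
    static class Account {
        private int balance = 0;
        private boolean frozen = false;

        void deposit(int amount) { balance += amount; }
        void freeze() { frozen = true; }
        void withdraw(int amount) { balance -= amount; } // bug: no frozen check
        int balance() { return balance; }
    }

    @Test
    public void everyMethodWorksInThisParticularOrder() {
        Account a = new Account();
        a.deposit(100);
        a.withdraw(30);
        a.freeze(); // every method has now been executed at least once
        assertEquals(70, a.balance());
    }

    @Test
    public void withdrawingAfterFreezingRevealsTheMissedPath() {
        Account a = new Account();
        a.deposit(100);
        a.freeze();
        a.withdraw(30); // should arguably be rejected, but isn't
        assertEquals(100, a.balance()); // fails, exposing the ordering bug
    }
}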
First, 100% unit test coverage is not enough even at the unit testing level: you cover only 100% of the instructions of your code. What about the paths in your code? What about input and output domains?
Second, you don't know whether output from a sender unit is compatible with input from its receiver unit. This is the purpose of integration testing.
Finally, unit testing may be performed on a different environment than production. Integration testing may reveal discrepancies.
You can only prove the presence of a bug using tests/coverage, but you can never prove that the code is bug-free using tests/coverage. This fact indicates the boundaries of testing/coverage. This is the same as in mathematics: you can disprove a theorem by finding a counterexample, but you can never prove a theorem by not finding a counterexample. So testing and coverage are only a substitute for correctness proofs, which are so difficult to do that they are almost never used. Testing and coverage can improve the quality of the code, but there is no guarantee. It remains a craft and not a science.
I've not really seen an answer that covers these considerations. Now, I'm speaking from a holistic systems perspective, not from a SW development perspective, but...
Integration is basically the process of combining lower level products into a higher level product. Each level has its own set of requirements to comply with. Although it is possible that some requirements are the same, the overall requirements set will be different for different levels. This means that test objectives are different at different levels.
Also, the environment of the higher level product tends to be different from that of the lower level product (e.g. SW module testing may occur on a desktop environment, whereas a complete loadable SW item may be tested when loaded in its HW component).
Furthermore, lower level component developers may not have the same understanding of the requirements and design as the higher level product developers, so integration testing also validates, to a certain extent, the lower level product development.
Unit tests are different from integration tests.
Just to make a point: if I have to choose, I would dump unit tests and go with integration tests. Experience tells that unit tests help to ensure functionality and also find bugs early in the development cycle.
Integration testing is done with the product looking close to what it will look like to end users. That is important too.
Unit tests are generally all about testing your class in isolation. They should be designed to ensure that given specific inputs your class exhibits predictable and expected behaviors.
Integration tests are generally all about testing your classes in combinations with each other and with "outside" programs using those classes. They should focus on ensuring that when the overall product uses your classes it is doing so in the correct manner.
"opaque (when broken someone has to dive through all the layers to find out what is wrong)" -- this is exactly why integration tests are done - otherwise those opaque issues would show up in production environment.
Yes, because the functionality of your software depends on how its different pieces interact. Unit tests depend on you coming up with the inputs and defining the expected output. Doing this doesn't guarantee that it will work with the rest of your system.
Yes, integration testing is a pain to deal with when you introduce code changes that deliberately change the output. With our software we minimize this by focusing on comparing the result of an integration test with a saved correct result.
We have a tool that we can use when we are sure that we are producing the correct results. It goes and loads up the old saved correct results and modifies them to work with the new setup.
I routinely see all sorts of issues uncovered by good integration testing - especially if you can automate some of your integration testing.
Unit tests are great, but you can accomplish 100% code coverage without 100% relevancy in your unit tests. You're really trying to test different things, right? In unit tests, you're looking for edge cases for a specific function, usually, whereas integration testing is going to show you problems at a higher level as all these functions interact.
If you build an API into your software, you can use this for automated integration testing - I've gotten a lot of mileage out of this in the past. I don't know that I'd go as far as to say that I'd dump unit tests in favor of integration tests, but when they're done right, they're a really powerful addition.
This exact question was basically just asked a day ago. See this question for lots of the errors you could run into even with 100% code coverage.
It doesn't look like it was mentioned here, but you can never actually have 100% unit test coverage (if you have a database involved). The moment you write a unit test for database connectivity and CRUD operations, you've just created an integration test. The reason is because your test now has a dependency outside of the individual units of work. The projects I've worked on, and the developers I've spoken with, have always indicated that the remaining 10% is the DAO or service layer. The best way to test that is with integration tests and a mock (in-memory) database. I've seen attempts to mock connections in order to unit test the DAO, but I don't really see the point -- your DAO is just a way to serialize raw data from one format to another, and your manager or delegate will decide how to manipulate it.

Tricks for writing better unit tests [closed]

What are some of the tricks or tools or policies (besides having a unit testing standard) that you guys are using to write better unit tests? By better I mean 'covers as much of your code in as few tests as possible'. I'm talking about stuff that you have used and saw your unit tests improve by leaps and bounds.
As an example I was trying out Pex the other day and I thought it was really really good. There were tests I was missing out and Pex easily showed me where. Unfortunately it has a rather restrictive license.
So what are some of the other great stuff you guys are using/doing?
EDIT: Lots of good answers. I'll be marking as correct the answer that I'm currently not practicing but will definitely try and that hopefully gives the best gains. Thanks to all.
1. Write many tests per method.
2. Test the smallest thing possible. Then test the next smallest thing.
3. Test all reasonable input and output ranges. IOW: if your method returns a boolean, make sure to test both the true and false returns. For an int? -1, 0, 1, n, n+1 (proof by mathematical induction). Don't forget to check for all Exceptions (assuming Java).
4a. Write an abstract interface first.
4b. Write your tests second.
4c. Write your implementation last.
5. Use Dependency Injection. (For Java: Guice - supposedly better, Spring - probably good enough.)
6. Mock your unit's collaborators with a good toolkit like Mockito (assuming Java, again) - see the sketch after this list.
7. Google much.
8. Keep banging away at it. (It took me 2 years - without much help but for Google - to start "getting it".)
9. Read a good book about the topic.
10. Rinse, repeat...
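As promised above, here is a minimal Java sketch of points 4a-6 together, using JUnit 4 and Mockito; PriceService, OrderCalculator and the SKU values are hypothetical names invented for the example.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// 4a: the abstract interface comes first.
interface PriceService {
    double priceOf(String sku);
}

// 4c: the implementation depends only on the interface and gets it injected (point 5).
class OrderCalculator {
    private final PriceService prices;
    OrderCalculator(PriceService prices) { this.prices = prices; }
    double total(String sku, int quantity) {
        return prices.priceOf(sku) * quantity;
    }
}

// 4b: the test drives the design, mocking the collaborator (point 6).
public class OrderCalculatorTest {
    @Test
    public void totalMultipliesUnitPriceByQuantity() {
        PriceService prices = mock(PriceService.class);
        when(prices.priceOf("ABC")).thenReturn(2.50);

        OrderCalculator calculator = new OrderCalculator(prices);

        assertEquals(7.50, calculator.total("ABC", 3), 0.001);
        verify(prices).priceOf("ABC");
    }
}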
Write tests before you write the code (i.e. Test Driven Development). If for some reason you are unable to write the tests first, write them as you write the code. Make sure that all the tests fail initially. Then go down the list and fix each broken one in sequence. This approach will lead to better code and better tests.
If you have time on your side, then you may even consider writing the tests, forgetting about it for a week, and then writing the actual code. This way you have taken a step away from the problem and can see the problem more clearly now. Our brains process tasks differently if they come from external or internal sources and this break makes it an external source.
And after that, don't worry about it too much. Unit tests offer you a sanity check and stable ground to stand on -- that's all.
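A tiny, hypothetical illustration of the red/green rhythm described above (JUnit 4; Slugifier is an invented name): the test is written first and seen to fail, and only then is the minimal code added to make it pass.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Written first: this fails (it doesn't even compile) until Slugifier exists.
public class SlugifierTest {
    @Test
    public void replacesSpacesWithDashesAndLowercases() {
        assertEquals("hello-world", Slugifier.slugify("Hello World"));
    }
}

// Added only after the test above was seen to fail.
class Slugifier {
    static String slugify(String input) {
        return input.trim().toLowerCase().replace(' ', '-');
    }
}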
On my current project we use a little generation tool to produce skeleton unit tests for the various entities and accessors. It provides a fairly consistent approach for each modular unit of work that needs to be tested, and gives developers a ready-made place to test out their implementations (i.e. the unit test class is added by default when the rest of the entities and other dependencies are added).
The structure of the (templated) tests follows a fairly predictable syntax, and the template allows for implementation of module/object-specific setup/teardown (we also use a base class for all the tests to encapsulate some logic).
We also create instances of entities (and assign test data values) in static functions so that objects can be created programmatically and used within different test scenarios and across test classes, which is proving to be very helpful.
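For example, a static test-data factory along those lines might look like this (a Java sketch; Customer and the default values are hypothetical and defined inline only so the example is complete):

// Hypothetical entity; in a real project this would be a domain or generated class.
class Customer {
    private Long id;
    private String name;
    void setId(Long id) { this.id = id; }
    void setName(String name) { this.name = name; }
    Long getId() { return id; }
    String getName() { return name; }
}

// Static factory methods that build entities with default test data, so any test
// class can create them programmatically and tweak only what its scenario needs.
final class TestCustomers {
    private TestCustomers() {}

    static Customer defaultCustomer() {
        Customer customer = new Customer();
        customer.setId(42L);
        customer.setName("Test Customer");
        return customer;
    }

    static Customer customerNamed(String name) {
        Customer customer = defaultCustomer();
        customer.setName(name);
        return customer;
    }
}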
Reading a book like The Art of Unit Testing will definitely help.
As far as policy goes, read Kent Beck's answer on SO, particularly:
to test as little as possible to reach a given level of confidence
Write pragmatic unit tests for the tricky parts of your code, and don't lose sight of the fact that it's the program you are testing that's important, not the unit tests.
I have a Ruby script that generates test stubs for "brown" code that wasn't built with TDD. It writes my build script, sets up includes/usings, and writes a setup/teardown to instantiate the test class in the stub. It helps to have a consistent starting point, without all the typing tedium, when I hack at code written in the Dark Times.
One practice I've found very helpful is the idea of making your test suite isomorphic to the code being tested. That means that the tests are arranged in the same order as the lines of code they are testing. This makes it very easy to take a piece of code and the test suite for that code, look at them side-by-side and step through each line of code to verify there is an appropriate test. I have also found that the mere act of enforcing isomorphism like this forces me to think carefully about the code being tested, such as ensuring that all the possible branches in the code are exercised by tests, or that all the loop conditions are tested.
For example, given code like this:
void MyClass::UpdateCacheInfo(CacheInfo *info)
{
    // Nothing to do if the cache info hasn't changed.
    if (mCacheInfo == info) {
        return;
    }

    info->incrRefCount();
    mCacheInfo->decrRefCount();
    mCacheInfo = info;
}
The test suite for this function would have the following tests, in order:
test UpdateCacheInfo_identical_info
test UpdateCacheInfo_increment_new_info_ref_count
test UpdateCacheInfo_decrement_old_info_ref_count
test UpdateCacheInfo_update_mCacheInfo

What Makes a Good Unit Test? [closed]

I'm sure most of you are writing lots of automated tests and that you also have run into some common pitfalls when unit testing.
My question is: do you follow any rules of conduct for writing tests in order to avoid problems in the future? To be more specific: What are the properties of good unit tests, and how do you write your tests?
Language agnostic suggestions are encouraged.
Let me begin by plugging sources - Pragmatic Unit Testing in Java with JUnit (there's a version with C#/NUnit too.. but I have this one.. it's agnostic for the most part. Recommended.)
Good Tests should be A TRIP (the acronym isn't sticky enough - I have a printout of the cheatsheet in the book that I had to pull out to make sure I got this right..)
Automatic : Invoking the tests, as well as checking the results for PASS/FAIL, should be automatic
Thorough: Coverage; Although bugs tend to cluster around certain regions in the code, ensure that you test all key paths and scenarios.. Use tools if you must to know untested regions
Repeatable: Tests should produce the same results each time.. every time. Tests should not rely on uncontrollable params.
Independent: Very important.
Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem.
Tests should not rely on each other - Isolated. No assumptions about order of test execution. Ensure 'clean slate' before each test by using setup/teardown appropriately
Professional: In the long run you'll have as much test code as production (if not more), therefore follow the same standard of good-design for your test code. Well factored methods-classes with intention-revealing names, No duplication, tests with good names, etc.
Good tests also run Fast. Any test that takes over half a second to run.. needs to be worked upon. The longer the test suite takes to run.. the less frequently it will be run, and the more changes the dev will try to sneak in between runs.. if anything breaks.. it will take longer to figure out which change was the culprit.
Update 2010-08:
Readable : This can be considered part of Professional - however it can't be stressed enough. An acid test would be to find someone who isn't part of your team and ask him/her to figure out the behavior under test within a couple of minutes. Tests need to be maintained just like production code - so make them easy to read even if it takes more effort. Tests should be symmetric (follow a pattern) and concise (test one behavior at a time). Use a consistent naming convention (e.g. the TestDox style). Avoid cluttering the test with "incidental details".. become a minimalist.
Apart from these, most of the others are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (e.g. third-party DLLs). Don't go about testing getters and setters. Keep an eye on cost-to-benefit ratio or defect probability.
Don't write ginormous tests. As the 'unit' in 'unit test' suggests, make each one as atomic and isolated as possible. If you must, create preconditions using mock objects, rather than recreating too much of the typical user environment manually.
Don't test things that obviously work. Avoid testing the classes from a third-party vendor, especially the one supplying the core APIs of the framework you code in. E.g., don't test adding an item to the vendor's Hashtable class.
Consider using a code coverage tool such as NCover to help discover edge cases you have yet to test.
Try writing the test before the implementation. Think of the test as more of a specification that your implementation will adhere to. Cf. also behavior-driven development, a more specific branch of test-driven development.
Be consistent. If you only write tests for some of your code, it's hardly useful. If you work in a team, and some or all of the others don't write tests, it's not very useful either. Convince yourself and everyone else of the importance (and time-saving properties) of testing, or don't bother.
Most of the answers here seem to address unit testing best practices in general (when, where, why and what), rather than actually writing the tests themselves (how). Since the question seemed pretty specific on the "how" part, I thought I'd post this, taken from a "brown bag" presentation that I conducted at my company.
Womp's 5 Laws of Writing Tests:
1. Use long, descriptive test method names.
- Map_DefaultConstructorShouldCreateEmptyGisMap()
- ShouldAlwaysDelegateXMLCorrectlyToTheCustomHandlers()
- Dog_Object_Should_Eat_Homework_Object_When_Hungry()
2. Write your tests in an Arrange/Act/Assert style.
While this organizational strategy has been around for a while and called many things, the introduction of the "AAA" acronym recently has been a great way to get this across. Making all your tests consistent with AAA style makes them easy to read and maintain.
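A minimal sketch of the AAA layout (written here in Java/JUnit, though the same shape applies in any xUnit framework; ShoppingCart, Item and the prices are hypothetical and defined inline only so the example compiles):

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical classes under test.
class Item {
    final String name;
    final double price;
    Item(String name, double price) { this.name = name; this.price = price; }
}

class ShoppingCart {
    private final List<Item> items = new ArrayList<Item>();
    void add(Item item) { items.add(item); }
    double total() {
        double sum = 0;
        for (Item item : items) { sum += item.price; }
        return sum;
    }
}

public class ShoppingCartTest {
    @Test
    public void Total_TwoItems_SumsTheirPrices() {
        // Arrange: build the object under test and its inputs.
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Item("book", 12.00));
        cart.add(new Item("pen", 3.00));

        // Act: perform the one action being tested.
        double total = cart.total();

        // Assert: verify the outcome, with a failure message.
        assertEquals("Cart total should be the sum of the item prices", 15.00, total, 0.001);
    }
}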
3. Always provide a failure message with your Asserts.
Assert.That(x == 2 && y == 2, "An incorrect number of begin/end element processing events was raised by the XElementSerializer");
A simple yet rewarding practice that makes it obvious in your runner application what has failed. If you don't provide a message, you'll usually get something like "Expected true, was false" in your failure output, which makes you have to actually go read the test to find out what's wrong.
4. Comment the reason for the test – what’s the business assumption?
/// A layer cannot be constructed with a null gisLayer, as every function
/// in the Layer class assumes that a valid gisLayer is present.
[Test]
public void ShouldNotAllowConstructionWithANullGisLayer()
{
}
This may seem obvious, but this practice will protect the integrity of your tests from people who don't understand the reason behind the test in the first place. I've seen many tests get removed or modified that were perfectly fine, simply because the person didn't understand the assumptions that the test was verifying.
If the test is trivial or the method name is sufficiently descriptive, it can be permissible to leave the comment off.
5. Every test must always revert the state of any resource it touches
Use mocks where possible to avoid dealing with real resources.
Cleanup must be done at the test level. Tests must not have any reliance on order of execution.
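A small sketch of what test-level cleanup can look like when a real resource is unavoidable (JUnit 4; the temp file and test names are hypothetical): the only resource touched is created in setup and reverted in teardown, so no test depends on what a previous one left behind.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TempFileStateTest {

    private File tempFile;

    @Before
    public void createWorkFile() throws IOException {
        // Each test gets its own scratch file.
        tempFile = Files.createTempFile("unit-test", ".txt").toFile();
    }

    @After
    public void revertState() {
        // Always clean up, even if the test failed; @After runs regardless.
        tempFile.delete();
    }

    @Test
    public void contentWrittenDuringTheTestIsReadBack() throws IOException {
        Files.write(tempFile.toPath(), "hello".getBytes());
        assertEquals("hello", new String(Files.readAllBytes(tempFile.toPath())));
    }
}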
Keep these goals in mind (adapted from the book xUnit Test Patterns by Meszaros)
Tests should reduce risk, not introduce it.
Tests should be easy to run.
Tests should be easy to maintain as the system evolves around them.
Some things to make this easier:
Tests should only fail because of one reason.
Tests should only test one thing.
Minimize test dependencies (no dependencies on databases, files, UI, etc.).
Don't forget that you can do integration testing with your xUnit framework too, but keep integration tests and unit tests separate.
Tests should be isolated. One test should not depend on another. Even further, a test should not rely on external systems. In other words, test your code, not the code your code depends on. You can test those interactions as part of your integration or functional tests.
Some properties of great unit tests:
When a test fails, it should be immediately obvious where the problem lies. If you have to use the debugger to track down the problem, then your tests aren't granular enough. Having exactly one assertion per test helps here.
When you refactor, no tests should fail.
Tests should run so fast that you never hesitate to run them.
All tests should pass always; no non-deterministic results.
Unit tests should be well-factored, just like your production code.
#Alotor: If you're suggesting that a library should only have unit tests at its external API, I disagree. I want unit tests for each class, including classes that I don't expose to external callers. (However, if I feel the need to write tests for private methods, then I need to refactor.)
EDIT: There was a comment about duplication caused by "one assertion per test". Specifically, if you have some code to set up a scenario, and then want to make multiple assertions about it, but only have one assertion per test, you might duplicate the setup across multiple tests.
I don't take that approach. Instead, I use test fixtures per scenario. Here's a rough example:
using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackTests
{
    // One nested fixture per scenario: the SetUp builds the scenario once,
    // and each test makes a single assertion about it.
    [TestFixture]
    public class EmptyTests
    {
        Stack<int> _stack;

        [SetUp]
        public void SetUp()
        {
            _stack = new Stack<int>();
        }

        [Test]
        public void PopFails()
        {
            Assert.Throws<InvalidOperationException>(() => _stack.Pop());
        }

        [Test]
        public void IsEmpty()
        {
            Assert.That(_stack, Is.Empty);
        }
    }

    [TestFixture]
    public class PushedOneTests
    {
        Stack<int> _stack;

        [SetUp]
        public void SetUp()
        {
            _stack = new Stack<int>();
            _stack.Push(7);
        }

        // Tests for one item on the stack...
    }
}
What you're after is delineation of the behaviours of the class under test.
Verification of expected behaviours.
Verification of error cases.
Coverage of all code paths within the class.
Exercising all member functions within the class.
The basic intent is to increase your confidence in the behaviour of the class.
This is especially useful when looking at refactoring your code. Martin Fowler has an interesting article regarding testing over at his web site.
HTH.
cheers,
Rob
Tests should initially fail. Then you should write the code that makes them pass; otherwise you run the risk of writing a test that is bugged and always passes.
I like the Right BICEP acronym from the aforementioned Pragmatic Unit Testing book:
Right: Are the results right?
B: Are all the boundary conditions correct?
I: Can we check inverse relationships?
C: Can we cross-check results using other means?
E: Can we force error conditions to happen?
P: Are performance characteristics within bounds?
Personally I feel that you can get pretty far by checking that you get the right results (1+1 should return 2 in an addition function), trying out all the boundary conditions you can think of (such as using two numbers whose sum is greater than the integer max value in the add function) and forcing error conditions such as network failures.
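To make that concrete, here is a hypothetical sketch of boundary tests for a simple add function (JUnit 4; the add helper is inlined in the test class only to keep the sketch self-contained):

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

// Checks the "right" result, the edges around zero, and a forced overflow condition.
public class AdderTest {

    // Hypothetical method under test; widened to long so overflow is visible.
    private long add(int a, int b) {
        return (long) a + (long) b;
    }

    @Test
    public void onePlusOneIsTwo() {
        assertEquals(2, add(1, 1));
    }

    @Test
    public void boundariesAroundZero() {
        assertEquals(0, add(-1, 1));
        assertEquals(-2, add(-1, -1));
    }

    @Test
    public void sumLargerThanIntegerMaxDoesNotWrapAround() {
        assertTrue(add(Integer.MAX_VALUE, 1) > Integer.MAX_VALUE);
    }
}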
Good tests need to be maintainable.
I haven't quite figured out how to do this for complex environments. All the textbooks start to come unglued as your code base reaches into the hundreds of thousands or millions of lines of code:
team interactions explode
the number of test cases explodes
interactions between components explode
the time to build all the unit tests becomes a significant part of the build time
an API change can ripple to hundreds of test cases, even though the production code change was easy
the number of events required to sequence processes into the right state increases, which in turn increases test execution time
Good architecture can control some of the interaction explosion, but inevitably, as systems become more complex, the automated testing system grows with them.
This is where you start having to deal with trade-offs:
only test the external API, otherwise refactoring the internals results in significant test case rework
setup and teardown of each test gets more complicated as an encapsulated subsystem retains more state
nightly compilation and automated test execution grows to hours
increased compilation and execution times mean designers don't or won't run all the tests
to reduce test execution times you consider sequencing tests to reduce setup and teardown
You also need to decide:
where do you store test cases in your code base?
how do you document your test cases?
can test fixtures be re-used to save test case maintenance?
what happens when a nightly test case execution fails? Who does the triage?
How do you maintain the mock objects? If you have 20 modules all using their own flavor of a mock logging API, changing the API ripples quickly. Not only do the test cases change, but the 20 mock objects change. Those 20 modules were written over several years by many different teams. It's a classic re-use problem.
individuals and their teams understand the value of automated tests; they just don't like how the other team is doing it. :-)
I could go on forever, but my point is that:
Tests need to be maintainable.
I covered these principles a while back in This MSDN Magazine article which I think is important for any developer to read.
The way I define "good" unit tests is that they possess the following three properties:
They are readable (naming, asserts, variables, length, complexity..)
They are maintainable (no logic, not over-specified, state-based, refactored..)
They are trustworthy (test the right thing, isolated, not integration tests..)
Unit testing just tests the external API of your unit; you shouldn't test internal behaviour.
Each test of a TestCase should test one (and only one) method inside this API.
Additional test cases should be included for failure cases.
Test the coverage of your tests: once a unit is tested, 100% of the lines inside that unit should have been executed.
Jay Fields has a lot of good advice about writing unit tests, and there is a post where he summarizes the most important points. There you will read that you should think critically about your context and judge whether the advice is worth following. You'll get a ton of amazing answers here, but it is up to you to decide which is best for your context. Try them, and refactor if something smells bad to you.
Kind Regards
Never assume that a trivial 2 line method will work. Writing a quick unit test is the only way to prevent the missing null test, misplaced minus sign and/or subtle scoping error from biting you, inevitably when you have even less time to deal with it than now.
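As a hypothetical example of the kind of "trivial" method this is talking about, and the quick tests that catch the classic mistakes (JUnit 4; the discount logic and names are invented for illustration):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountTest {

    // The kind of two-line method nobody bothers to test: a missing null check or a
    // flipped sign here is exactly the bug that bites you later.
    static double discounted(Double price, double discountPercent) {
        if (price == null) return 0.0;                    // the easily forgotten null case
        return price - price * (discountPercent / 100.0); // easy to get the sign wrong
    }

    @Test
    public void nullPriceIsTreatedAsZero() {
        assertEquals(0.0, discounted(null, 10), 0.001);
    }

    @Test
    public void tenPercentOffOneHundredIsNinety() {
        assertEquals(90.0, discounted(100.0, 10), 0.001);
    }
}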
I second the "A TRIP" answer, except that tests SHOULD rely on each other!!!
Why?
DRY - Don't Repeat Yourself - applies to testing as well! Test dependencies can help to 1) save setup time, 2) save fixture resources, and 3) pinpoint failures. Of course, this only holds if your testing framework supports first-class dependencies. Otherwise, I admit, they are bad.
Follow up http://www.iam.unibe.ch/~scg/Research/JExample/
Often unit tests are based on mock objects or mock data.
I like to write three kinds of unit tests:
"transient" unit tests: they create their own mock objects/data and test their function with them, but destroy everything and leave no trace (e.g. no data left in a test database)
"persistent" unit tests: they test functions within your code while creating objects/data that will be needed later on by more advanced functions for their own unit tests (so those advanced functions don't have to recreate their own set of mock objects/data every time)
"persistent-based" unit tests: unit tests using mock objects/data that are already there, because they were created in another unit test session by the persistent unit tests
The point is to avoid replaying everything in order to be able to test every function.
I run the third kind very often because all the mock objects/data are already there.
I run the second kind whenever my model changes.
I run the first kind once in a while, to check the very basic functions and catch basic regressions.
Think about the 2 types of testing and treat them differently - functional testing and performance testing.
Use different inputs and metrics for each. You may need to use different software for each type of test.
I use a consistent test naming convention, described by Roy Osherove's Unit Test Naming Standards: each method in a given test case class has the naming style MethodUnderTest_Scenario_ExpectedResult.
The first section of the test name is the name of the method in the system under test.
Next is the specific scenario that is being tested.
Finally comes the expected result of that scenario.
Each section uses Upper Camel Case and is delimited by an underscore.
I have found this useful because when I run the tests they are grouped by the name of the method under test, and having a convention allows other developers to understand the test's intent.
I also append parameters to the method name if the method under test has been overloaded.
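For illustration, a short sketch of that convention (written here in Java/JUnit, though the convention itself is language-agnostic; BankAccount and its methods are hypothetical and defined inline only so the example compiles):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Tests named MethodUnderTest_Scenario_ExpectedResult; runners then group them
// by the method under test.
public class BankAccountTests {

    @Test
    public void Withdraw_AmountLessThanBalance_ReducesBalance() {
        BankAccount account = new BankAccount(100);
        account.withdraw(40);
        assertEquals(60, account.balance());
    }

    @Test(expected = IllegalArgumentException.class)
    public void Withdraw_AmountGreaterThanBalance_Throws() {
        new BankAccount(100).withdraw(150);
    }

    // Overloaded method under test: the parameter is appended to the method-name section.
    @Test
    public void WithdrawWithFee_AmountLessThanBalance_ReducesBalanceByAmountPlusFee() {
        BankAccount account = new BankAccount(100);
        account.withdraw(40, 5);
        assertEquals(55, account.balance());
    }
}

// Minimal hypothetical class under test, just enough to make the example compile.
class BankAccount {
    private int balance;
    BankAccount(int openingBalance) { this.balance = openingBalance; }
    int balance() { return balance; }
    void withdraw(int amount) {
        if (amount > balance) throw new IllegalArgumentException("insufficient funds");
        balance -= amount;
    }
    void withdraw(int amount, int fee) { withdraw(amount + fee); }
}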