Testing When Correctness is Poorly Defined? - unit-testing

I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions.
However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic.
Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?

I think you just need to write unit tests based on small sets of data that will make sure that your code is doing exactly what you want it to do. Whether this gives you a reasonable data-mining algorithm is a separate issue, and I don't think it is possible to solve it with unit tests. There are two "levels" of correctness of your code:
Your code correctly implements the given data mining algorithm (this is the thing you should unit-test)
The data mining algorithm you implement is "correct" - it solves the business problem. This is quite an open question; it probably depends both on some parameters of your algorithm and on the actual data (different algorithms work for different types of data).

When facing cases like this I tend to build one or more stub data sets that reflect the proper underlying complexities of the real-life data. I often do this together with the customer, to make sure I capture the essence of the complexities.
Then I can just codify these into one or more datasets that can be used as a basis for making very specific unit tests (sometimes they're more like integration tests with stub data, but I don't think that's an important distinction). So while your algorithm may have "fuzzy" results for a "generic" dataset, these algorithms almost always have a single correct answer for a specific dataset.
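As an illustration, here is a minimal pytest-style sketch of that idea: a tiny stub dataset agreed with the customer, with a single correct answer, run against a stand-in miner (find_frequent_pairs is a hypothetical name, implemented inline only so the test is runnable):

    from collections import Counter
    from itertools import combinations

    def find_frequent_pairs(transactions, min_support):
        # Stand-in miner: count co-occurring item pairs and keep the frequent ones.
        counts = Counter()
        for items in transactions:
            for pair in combinations(sorted(set(items)), 2):
                counts[pair] += 1
        return {pair for pair, n in counts.items() if n >= min_support}

    def test_known_pair_is_found_in_stub_dataset():
        # Stub dataset built with the customer: "bread" and "butter" co-occur three
        # times, which is the single correct answer for min_support=3.
        transactions = [
            ["bread", "butter", "jam"],
            ["bread", "butter"],
            ["bread", "butter", "milk"],
            ["milk", "jam"],
        ]
        assert find_frequent_pairs(transactions, min_support=3) == {("bread", "butter")}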

Well, there are a few answers.
First of all, as you mentioned, take a small case study, and do the math by hand. Since you wrote the algorithm, you know what it's supposed to do, so you can do it in a limited case.
The other one is to break down every component of your program into testable parts.
If A calls B, B calls C, and C calls D, and you know that A, B, C, and D each give the right answer, then by testing A->B, B->C, and C->D you can be reasonably sure that A->D gives the correct response.
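A minimal sketch of that idea, using hypothetical stand-in stages rather than anyone's real mining pipeline: test each stage on its own, then one cheap end-to-end check to confirm the wiring:

    def tokenize(text):            # stage A
        return text.lower().split()

    def count_words(tokens):       # stage B
        counts = {}
        for t in tokens:
            counts[t] = counts.get(t, 0) + 1
        return counts

    def drop_rare(counts, min_n):  # stage C
        return {w: n for w, n in counts.items() if n >= min_n}

    def rank(counts):              # stage D
        return sorted(counts, key=counts.get, reverse=True)

    def test_tokenize():
        assert tokenize("Big Data big") == ["big", "data", "big"]

    def test_count_words():
        assert count_words(["big", "data", "big"]) == {"big": 2, "data": 1}

    def test_drop_rare():
        assert drop_rare({"big": 2, "data": 1}, min_n=2) == {"big": 2}

    def test_pipeline_end_to_end():
        # If each stage is correct, the composition should be too; one end-to-end
        # check on a tiny input confirms the stages are wired together properly.
        assert rank(drop_rare(count_words(tokenize("Big Data big")), 1)) == ["big", "data"]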
Also, if there are other programs out there that do what you are looking to do, try to acquire their datasets, or find an open-source project whose test data you could run against your application and see if it gives similar results.
Another way to test data-mining code is to take a test set, introduce a pattern of the type you're looking for, and then test again to see whether it separates the new pattern from the old ones.
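A minimal sketch of the plant-a-pattern approach, with a deliberately simple stand-in detector (a z-score rule) so the example runs; the point is the test structure, not the detector:

    import random
    import statistics

    def detect_outliers(values, threshold=3.0):
        # Stand-in detector: flag indices more than `threshold` standard deviations from the mean.
        mean = statistics.mean(values)
        stdev = statistics.pstdev(values)
        return [i for i, v in enumerate(values) if abs(v - mean) > threshold * stdev]

    def test_injected_spike_is_detected():
        random.seed(42)                  # deterministic test data
        data = [random.gauss(0.0, 1.0) for _ in range(1000)]
        data[500] = 25.0                 # plant a known pattern (a spike) in the noise
        assert 500 in detect_outliers(data)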
And, the tried and true, walk through your own code by hand and see if the code is doing what you meant it to do.

Really, the challenge here is this: because your application is meant to do a fuzzy, non-deterministic kind of task in a smart way, the very goal you hope to achieve is that the application becomes better than human beings at finding these patterns. That's great, powerful, and cool ... but if you pull it off, then it becomes very hard for any human beings to say, "In this case, the answer should be X."
In fact, ideally the computer would say, "Not really. I see why you think that, but consider these 4.2 terabytes of information over here. Have you read them yet? Based on those, I would argue that the answer should be Z."
And if you really succeeded in your original goal, the end user might sometimes say, "Zowie, you're right. That is a better answer. You found a pattern that is going to make us money! (or save us money, or whatever)."
If such a thing could never happen, then why are you asking the computer to detect these kinds of patterns in the first place?
So, the best thing I can think of is to let real life help you build up a list of test scenarios. If there ever was a pattern discovered in the past that did turn out to be valuable, then make a "unit test" that sees if your system discovers it when given similar data. I say "unit test" in quotes because it may be more like an integration test, but you may still choose to use NUnit or VS.Net or RSpec or whatever unit test tools you're using.
For some of these tests, you might somehow try to "mock" the 4.2 terabytes of data (you won't really mock the data, but at some higher level you'd mock some of the conclusions reached from that data). For others, maybe you have a "test database" with some data in it, from which you expect a set of patterns to be detected.
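A minimal sketch of that idea: instead of mocking the raw data, hand the detector a fake repository that returns canned summaries (FakeSalesRepository and find_rising_products are hypothetical names used only for illustration):

    class FakeSalesRepository:
        """Stands in for the layer that would normally summarize terabytes of raw data."""
        def products(self):
            return ["widget", "gadget"]

        def monthly_totals(self, product):
            return {"widget": [100, 140, 200], "gadget": [300, 290, 310]}[product]

    def find_rising_products(repo, min_growth=1.5):
        # Flags products whose latest monthly total grew by min_growth over the first month.
        rising = []
        for product in repo.products():
            totals = repo.monthly_totals(product)
            if totals[-1] >= min_growth * totals[0]:
                rising.append(product)
        return rising

    def test_rising_product_detected_from_canned_summaries():
        assert find_rising_products(FakeSalesRepository()) == ["widget"]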
Also, if you can do it, it would be great if the system could "describe its reasoning" behind the patterns it detects. This would let the business user deliberate over the question of whether the application was right or not.

This is tricky. This sounds similar to writing tests around our text search engine. If you keep struggling, you'll figure something out:
Start with a small, simplified but reasonably representative data sample, and test basic behavior against that.
Rather than asserting that the output is exactly some answer, it's sometimes better to figure out what is important about it. For example, for our search engine, I didn't care so much about the exact order the documents were listed in, as long as the three key ones were on the first page of results (see the sketch after this list).
As you make a small, incremental change, figure out what the essence of it is and write a test for that. Even though the overall calculations take many inputs, individual changes to the codebase should be isolatable. For example, we found certain documents weren't being surfaced because of the presence of hyphens in some of the key words. We created tests verifying that this behaved the way we expected.
Look at tools like FitNesse, which allow you to throw a large number of datasets at a piece of code and assert things about the results. This may be easier to understand than more traditional unit tests.
I've gone back to the product owner, saying "I can't understand how this will work. How will we know if it's right?" Maybe s/he can articulate the essence of the vaguely defined problem. This has worked really well for me many times, and I've talked people out of features because they couldn't be explained.
Be creative!
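Here is the sketch referred to above: asserting the property that matters (the three key documents appear on the first page) rather than the exact ranking. The search function is a toy stand-in, not a real engine:

    def search(query, index):
        # Toy scorer: rank documents by how many query words they contain.
        words = set(query.lower().split())
        scored = [(len(words & set(text.lower().split())), doc_id)
                  for doc_id, text in index.items()]
        return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

    def test_key_documents_appear_on_first_page():
        index = {
            "doc-1": "annual report on widget sales",
            "doc-2": "widget sales forecast and report",
            "doc-3": "report about widget defects",
            "doc-4": "holiday party photos",
        }
        first_page = search("widget sales report", index)[:10]
        # Order does not matter; presence of the three key documents does.
        assert {"doc-1", "doc-2", "doc-3"} <= set(first_page)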

Ultimately, you have to decide what your program should be doing, and then test for that.

Related

When doing TDD, why should I do "just enough" to get a test passing?

Looking at posts like this and others, it seems that the correct way to do TDD is to write a test for a feature, get just that feature to pass, and then add another test and refactor as necessary until it passes, then repeat.
My question is: why is this approach used? I completely understand the write tests first idea, because it helps your design. But why wouldn't I create all tests for a specific function, and then implement that function all at once until all tests pass?
The approach comes from the Extreme Programming principle of You Aren't Going to Need It. If you write a single test, then the code that makes it pass, and keep repeating that process, you usually find that you write just enough to get things working. You don't invent new features that are not needed. You don't handle corner cases that don't exist.
Try an experiment. Write out the list of tests you think you need. Set it aside. Then go with the one-test-at-a-time approach. See if the lists differ, and why. When I do that I almost always end up with fewer tests, and I almost always find that I invented a case I didn't need when I did it the all-tests-first way.
For me, it is about "thought burden." If I have all of the possible behaviors to worry about at once, my brain is strained. If I approach them one at a time, I can give full attention to solving the immediate problem.
I believe this derives from the principle of "YAGNI" ("You Ain't Gonna Need It")(*), which states that classes should be as simple as necessary, with no extra features. Hence when you need a feature, you write a test for it, then you write the feature, then you stop. If you wrote a number of tests first, you would clearly be merely speculating on what your API would need to be at some point in the future.
(*) I generally translate that as "You are too stupid to know what will be needed in the future", but that's another topic......
IMHO it reduces the chance of over-engineering the piece of code you are writing.
It's just easier to add unnecessary code when you are looking at different usage scenarios.
Dan North has suggested that there is no such thing as test-driven design because the design is not really driven out by testing -- that these unit tests only become tests once functionality is implemented, but during the design phase you are really designing by example.
This makes sense -- your tests are setting up a range of sample data and conditions with which the system under test is going to operate, and you drive out design based on these example scenarios.
Some of the other answers suggest that this is based on YAGNI. This is partly true.
Beyond that, though, there is the issue of complexity. As is often stated, programming is about managing complexity -- breaking things down into comprehensible units.
If you write 10 tests to cover cases where param1 is null, param2 is null, string1 is empty, int1 is negative, and the current day of the week is a weekend, and then go to implement that, you have to juggle a lot of complexity at once. This opens up space to introduce bugs, and it becomes very difficult to sort out why tests are failing.
On the other hand, if you write the first test to cover an empty string1, you barely have to think about the implementation. Once the test is passing, you move on to a case where the current day is a weekend. You look at the existing code and it becomes obvious where the logic should go. You run tests and if the first test is now failing, you know that you broke it while implementing the day-of-the-week thing. I'd even recommend that you commit source between tests so that if you break something you can always revert to a passing state and try again.
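A minimal sketch of that incremental rhythm, with hypothetical names taken from the scenario above; each branch was driven out by one test at a time:

    import datetime

    def process(string1, today):
        # The first test drove out the empty-string case...
        if string1 == "":
            return "nothing to do"
        # ...the second test drove out the weekend rule, added without touching the first path.
        if today.weekday() >= 5:
            return "deferred until Monday"
        return "processed " + string1

    def test_empty_string_is_a_no_op():
        assert process("", datetime.date(2024, 1, 3)) == "nothing to do"

    def test_weekend_work_is_deferred():
        assert process("report", datetime.date(2024, 1, 6)) == "deferred until Monday"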
Doing just a little at a time and then verifying that it works dramatically reduces the space for the introduction of defects, and when your tests fail after implementation you have changed so little code that it is very easy to identify the defect and correct it, because you know that the existing code was already working properly.
This is a great question. You need to find a balance between writing every test in the universe of possible tests and covering the most likely user scenarios. One test is, IMHO, not enough, and I typically like to write 3 or 4 tests which represent the most common uses of the feature. I also like to write a best-case test and a worst-case test as well.
Writing many tests helps you to anticipate and understand the potential use of your feature.
I believe TDD advocates writing one test at a time because it forces you to think in terms of the principle of doing the simplest thing that could possibly work at each step of development.
I think the post you linked to is exactly the answer. If you write all the tests and all of the scenarios first, you will probably write your code to handle all of those scenarios at once, and most of the time you will end up with code that is fairly complex as a result.
On the other hand, if you go one at a time, you will end up refactoring your existing code each time to end up with code probably as simple as it can be for all the scenarios.
As in the case of the link you gave in your question, had they written all the tests first, I am pretty sure they would not have ended up with a simple if/else statement, but probably with a fairly complex recursive piece of code.
The reason behind the principle is simple. How practical it is to stick to is a separate question.
The reason is that if you are writing more code than is needed to pass the current test, you are writing code that is, by definition, untested. (It's nothing to do with YAGNI.)
If you write the next test to "catch up" with the production code then you've just written a test that you haven't seen fail. The test may be called "TestNextFeature" but it may as well return true for all the evidence you have on it.
TDD is all about making sure that all code - production and tests - is tested and that all those pesky "but I'm sure I wrote it right" bugs don't get into the code.
I would do as you suggest. Write several tests for a specific function, implement the function, and ensure that all of the tests for this function pass. This ensures that you understand the purpose and usage of the function separately from your implementation of it.
If you need to do a lot more implementation wise than what is tested by your unit tests, then your unit tests are likely not comprehensive enough.
I think part of that idea is to keep simplicity, keep to designed/planned features, and make sure that your tests are sufficient.
Lots of good answers above - YAGNI is the first answer that jumps to mind.
The other important thing about the 'just get the test passing' guideline, though, is that TDD is actually a three-stage process:
Red > Green > Refactor
Frequently revisiting the final part, the refactoring, is where a lot of the value of TDD is delivered in terms of cleaner code, better API design, and more confidence in the software. You need to refactor in really small, short blocks, though, lest the task become too big.
It is hard to get into this habit, but stick with it, as it's an oddly satisfying way to work once you get into the cycle.

Any advice for a developer given the task of enhancing & refactoring a business critical application?

Recently I inherited a business critical project at work to "enhance". The code has been worked on and passed through many hands over the past five years. Consultants and full-time employees who are no longer with the company have butchered this very delicate and overly sensitive application. Most of us have to deal with legacy code or this type of project... its part of being a developer... but...
There are zero unit tests and zero system tests. Logic is inter-mingled (and sometimes duplicated for no reason) between stored procedures, views (yes, I said views) and code. Documentation? Yeah, right.
I am scared. Yes, very scared to make even the most minimal of "tweaks" or refactors. One little mishap, and there would be major income loss and potential legal issues for my employer.
So, any advice? My first thought would be to begin writing assertions/unit tests against the existing code. However, that can only go so far because there is a lot of logic embedded in stored procedures. (I know it's possible to test stored procedures, but historically it's much more difficult compared to unit testing source code logic.)
Another or additional approach would be to compare the database state before and after the application has performed a function, make some code changes, then do database state compare.
I just rewrote thousands of lines of the most complex subsystem of an enterprise filesystem to make it multi-threaded, so all of this comes from experience. If the rewrite is justified (it is if the rewrite is being done to significantly enhance capabilities, or if existing code is coming in the way of putting in more enhancements), then here are the pointers:
You need to be confident in your own abilities first of all to do this. That comes only if you have enough prior experience with the technologies involved.
Communicate, communicate, communicate. Let all involved stake-holders know, this is a mess, this is risky, this cannot be done in a hurry, this will need to be done piece-meal - attack one area at a time.
Understand the system inside out. Document every nuance, trick and hack. Document the overall design. Ask any old-timers about historical reasons for the existence of any code you cannot justify. These are the mines you don't want to step on - you might think those are useless pieces of code and then regret later after getting rid of them.
Unit test. Work the system through any test-suite which already exists, otherwise first write the tests for existing code, if they don't exist.
Spew debugging code all over the place during the rewrite - asserts, logging, console prints (you should have the ability to turn them on and off, as well as specify different levels of output, i.e. control verbosity; see the sketch after this list). This is a must in my experience, and helps tremendously during a rewrite.
When going through the code, make a list of all the things that need to be done - things you need to find out, things you need to write tests for, things you need to ask questions about, notes to remind you how to refactor some piece of code, anything that can affect your rewrite... you cannot afford to forget anything! I do this using Outlook Tasks (just make sure whatever you use is always in front of you - this is the first app I open as soon as I sit down at my desk). If I get interrupted, I write down anything that I have been thinking about and hints about where to continue after coming back to the task.
Try to avoid hacks in your rewrite (that's one of the reasons you are rewriting it). Think about the tough problems you encounter. Discuss them with other people and bounce your ideas off them (nothing beats this), and put in clean solutions. Look at all the tasks you put into the todo list - make a 10,000-foot picture of the existing design, then decide what the new rewrite should look like (in terms of modules, sub-modules, how they fit together, etc.).
Tackle the toughest problems before any others. That'll save you from running into problems you cannot solve near the end of the tunnel, and save you from taking any steps backward. Of course, you need to know what the toughest problems will be - so again, better to document everything first during your forays into the existing code.
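Here is the sketch referred to in the point about debugging output: verbosity-controlled logging using Python's standard logging module (the environment variable name is an arbitrary choice):

    import logging
    import os

    log = logging.getLogger("rewrite")

    def configure_logging():
        # e.g. REWRITE_LOG_LEVEL=DEBUG for full verbosity; WARNING (the default) to quiet it down.
        level = os.environ.get("REWRITE_LOG_LEVEL", "WARNING")
        logging.basicConfig(level=getattr(logging, level, logging.WARNING),
                            format="%(levelname)s %(name)s: %(message)s")

    def migrate_record(record):
        log.debug("migrating record %r", record)
        assert isinstance(record, dict), "records are expected to be dicts at this point"
        log.info("migrated record id=%s", record.get("id"))
        return record

    if __name__ == "__main__":
        configure_logging()
        migrate_record({"id": 1, "payload": "x"})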
Get a very firm list of requirements.
Make sure you have implicit requirements as well as explicit ones - i.e. what programs it has to work with, and how.
Write all scenarios and use cases for how it is currently being used.
Write a lot of unit tests.
Write a lot of integration tests to test the integration of the program with existing programs it has to work with.
Talk to everyone who uses the program to find out more implicit requirements.
Test, test, test changes before moving into production.
CYA :)
Two things, beyond @Sudhanshu's great list (and, to some extent, disagreeing with his #8):
First, be aware that untested code is buggy code - what you are starting with almost certainly does not work correctly, for any definition of "correct" other than "works just like the unmodified code". That is, be prepared to find unexpected behavior in the system, to ask experts in the system about that behavior, and for them to conclude that it's not working the way it should. Prepare them for it, too - warn them that without tests or other documentation, there's no reason to think it works the way they think it does.
Next: refactor the low-hanging fruit. Take it easy, take it slow, be very careful. Notice something easy in the code - duplication, say - and test the hell out of whatever methods contain the duplication, then eliminate it. Lather, rinse, repeat. Don't write tests for everything before making changes, but write tests for whatever you're changing. This way, it stays releasable at every stage and you are continuously adding value, continuously improving the code base.
I said "two things", but I guess I'll add a third: Manage expectations. Let your customer know how scared you are of this task; let them know how bad what they've got is. Let them know how slow progress will be, and let them know you'll keep them informed of that progress (and, of course, do it). Your customer may think s/he's asking for "just a little fix" - and the functionality may indeed change only a little - but that doesn't mean it's not going to be a lot of work and a lot of time. You understand that; your customer needs to, too.
I've had this problem before and I've asked around (before the days of Stack Overflow), and this book has always been recommended to me: Working Effectively with Legacy Code by Michael Feathers (http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052).
Ask yourself this: what are you trying to achieve? What is your mission? How much time do you have? What is the measurement for success? What risks are there? How do you mitigate and deal with them?
Don't touch anything unless you know what it is you're trying to achieve.
The code might be "bad", but what does that mean? The code works, right? So if you rewrite the code so it does the same thing, you'll have spent a lot of time rewriting something, introducing bugs along the way, just so the code does the same thing. To what end?
The simplest thing you can do is document what the system does. And I don't mean write mind-numbing Word documents no one will ever read. I mean writing tests on key functionality, refactoring the code if necessary to allow such tests to be written.
You said you are scared to touch the code because of the legal risk, the potential income loss, and the complete lack of documentation. So do you understand the code? The first thing you should do is document it and make sure you understand it before you even think about refactoring. Once you have done that and identified the problem areas, make a list of your refactoring proposals in order of maximum benefit for minimum change, and attack it incrementally. Refactoring makes additional sense if the expected lifespan of the code will be long, new features will be added, or bug fixes are numerous. As for testing the database state - I worked on a project recently where that is exactly what we did, with success.
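For what it's worth, a minimal sketch of that database-state comparison, using an in-memory SQLite database; apply_discount stands in for whatever legacy operation is under test:

    import sqlite3

    def snapshot(conn):
        # Capture the rows we care about in a deterministic order.
        return conn.execute("SELECT id, price FROM orders ORDER BY id").fetchall()

    def apply_discount(conn, percent):
        conn.execute("UPDATE orders SET price = price * (100 - ?) / 100.0", (percent,))

    def test_discount_changes_only_prices():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, price REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 100.0), (2, 50.0)])

        before = snapshot(conn)
        apply_discount(conn, percent=10)
        after = snapshot(conn)

        # Compare state before and after: same rows, prices reduced by 10%.
        assert [row[0] for row in after] == [row[0] for row in before]
        assert after == [(1, 90.0), (2, 45.0)]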
Is it possible to get a separation of the DB and non-DB parts, so that a DBA can take on the challenge of the stored procedures and databases themselves freeing you up to work on the other parts of the system? This also presumes that there is a DBA who can step up and take that part of the application.
If that isn't possible, then I'd make the suggestion of seeing how big is the codebase and if it is possible to get some assistance so it isn't all on you. While this could be seen as side-stepping responsibility, the point would be that things shouldn't be in just one person's hands usually as they can disappear at times.
Good luck!

Testing approach for algorithms with complex outputs

How do you test the result of a program that is basically a black box? For example, a year ago I had to write a B-tree as homework and I really struggled with testing its correctness. What strategies do you use in such scenarios? Visualization? Robust input-->result sets of test data? And what do you do when it is hard to get such data because the only way to get it is from your properly working program?
EDIT: I think that my question was misunderstood. There was no problem with understanding how a B-tree works; that is trivial. But writing robust tests to validate its proper functionality is not so trivial. I think that this school problem is similar to many practical, real-world scenarios and test cases. And sometimes understanding the domain is quite different from delivering a working and correct program...
EDIT2: And yes, with a B-tree it is possible to validate proper behavior with pen and paper. But this is really dirty and not fun :) It does not work well with problems that require huge amounts of data for their validation...
I'm not sure these answers really capture the problem at hand. A B-tree's input and output aren't any different from those of any other dictionary, but the algorithm performs better, if it's implemented correctly. It only really has two functions to test (add and find), so in theory "black-box" testing of this single component should be fine. Designing for testability isn't the issue, since no matter how you do it the whole algorithm will be one component.
So the question is: when you have to implement subtle algorithms, the kinds with complicated output that you can't always understand in your head so well, how do you test them? I think there are three different strategies you can use:
Black-box test basic functionality. For the B-tree case, this is things like cwash suggested, and also, things like making sure that when you add an item, you can then find it, etc.
Test certain invariants that your algorithm should maintain (the B-tree should be balanced, values within nodes should be sorted, etc.)
A few, small "pencil-and-paper" tests may be necessary -- work the algorithm out by hand and check that it matches what your code does. But the big-data tests can all be of type 2. These can also be brittle, so unless you need to be really sure about your algorithm, you may want to avoid them.
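A minimal sketch of the type-2 (invariant) checks, assuming a simple node representation rather than any particular B-tree implementation:

    class Node:
        def __init__(self, keys, children=None):
            self.keys = keys
            self.children = children or []   # an empty list means this node is a leaf

    def keys_sorted(node):
        ordered = all(a < b for a, b in zip(node.keys, node.keys[1:]))
        return ordered and all(keys_sorted(c) for c in node.children)

    def leaf_depths(node, depth=0):
        if not node.children:
            return {depth}
        return set().union(*(leaf_depths(c, depth + 1) for c in node.children))

    def node_sizes_ok(node, max_keys):
        return len(node.keys) <= max_keys and all(node_sizes_ok(c, max_keys) for c in node.children)

    def check_invariants(root, max_keys):
        assert keys_sorted(root), "keys within every node must be sorted"
        assert len(leaf_depths(root)) == 1, "all leaves must sit at the same depth (balance)"
        assert node_sizes_ok(root, max_keys), "no node may hold more than max_keys keys"

    def test_small_tree_satisfies_invariants():
        tree = Node([10], [Node([3, 7]), Node([12, 20])])
        check_invariants(tree, max_keys=3)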
If you do not grasp the problem at hand, how can you develop a solution to it? My suggestion would be to understand the domain enough to be able to work out the problem on paper and ensure that your program matches.
Consult with an expert on the subject.
I know if I have a convoluted procedure I'm trying to fix, I have no idea what the output should be after my changes, so I need to consult a fellow developer with more knowledge of the business need, and they are able to verify what I've done is correct.
I would focus on constructing test cases that exercise the functionality of your B-tree algorithm. I haven't looked at it for years, but I'm fairly sure you'll be able to find a documented sequence of steps to insert a set of values in a specific order, then validate that the leaf nodes are as they should be. If you construct your testing along these lines, you should be able to prove your implementation is correct.
The key is to know there is a balance between testing something to death and doing tests that adequately cover what should be covered. Edge cases, e.g. null inputs, or checking that inputs are numeric by testing an alphabetic character or a punctuation character, are likely most of the tests you'd need. To complement this there may be one or two common cases to handle, to show the program can handle a non-edge case as well. Covering all valid input in most programs is overkill and would result in an overwhelmingly large number of tests.
I think the answer to the question you're asking boils down to designing for testability. Often you get a testable design for free when you test-drive the development of the solution. But let's face it, when you're implementing a highly mathematical algorithm, this just doesn't fall out.
To make sure you have a testable design, you need to understand what a seam is. Then you need to know a few rules of thumb, such as avoiding statics, using polymorphism, and properly decomposing problems and separating concerns.
Watch "The Clean Code Talks -- Unit Testing" by Misko Hevery, I think it will help you wrap your head around it.
Try looking at it from a requirements point of view, rather than an implementation point of view. Before you write code, you must understand exactly what you want it to do.
Testing and requirements should be a matching pair. If you're having trouble defining tests, maybe it's because the requirements are not well defined. That in turn implies that you may have bugs that aren't so much implementation bugs as "lack of clear requirements" bugs. The code writer in that case would be working to a mental list of what he/she thinks the requirements are, but can't be sure, because they're not written down for independent understanding and verification.
I've struggled with software where the requirements weren't clear, because the customer couldn't even tell us what they wanted. But when we delivered to them, they sure could tell us then what they didn't like about it! A big part of software engineering is getting the requirements right before the coding begins. This is true on the high-level (overall product, with requirement input from customer) and also the smaller level (modules, individual functions, where requirements are internally defined by software team or individuals). It is still true to some degree I think for iterative development, although the high-level requirements are more fluid.
@Bystrik Jurina,
I often get involved in projects which involve conversions between disparate data formats. Most answers have focused on testing a B-tree or similar algorithm, but it seems that you're looking for a more general answer.
Most of my work is based on the command line. It may sound like a contradiction, but one of the first tools I use is visualization. I'll write some methods to write out my data structures in a format that's easy to consume. This can (and usually does) include something that's visually clear. But often it also means something that I could easily parse with a smaller test program, or even import into Excel.
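For example, a minimal sketch of such a dump helper, writing an intermediate structure to CSV so it can be eyeballed or pulled into Excel (cluster_sizes is a hypothetical intermediate result):

    import csv

    def dump_cluster_sizes(cluster_sizes, path):
        # Write one row per cluster so the file opens cleanly in Excel or a checking script.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["cluster_id", "size"])
            for cluster_id, size in sorted(cluster_sizes.items()):
                writer.writerow([cluster_id, size])

    # Usage: dump_cluster_sizes({"a": 120, "b": 7}, "clusters.csv") after each pipeline
    # step, then inspect the file by eye or with a small validation script.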
I'll start by focusing on the basic outline, and write a program that does the bare minimum of what I need to accomplish. If it's a multi-step process, this might mean implementing one step at a time and validating the results of each step before moving on. Or writing something that works only in specific cases, and then expanding the set of cases where it's expected to work. At first you can validate that the code works for a limited set of cases, such as known input data. As the project moves forward, you can start logging warnings for cases you might not have tested, or for unexpected types of input data. This has drawbacks, but is a nice approach when you're dealing with a known set of input data.
Validation techniques can include formal test cases, or informal programs that work to challenge your assumptions. It could mean writing a basic driver program to exercise the "core" routines. A good example would be to add a record to a database, then read it back and compare the original object against the one loaded from the database.
If you have trouble wrapping your head around the way a program functions, think about what it needs to accomplish. It might be easier to write code that tests the way different inputs produce different outputs. Producing visualizations is a good help, because the act of deciding how to display the data can make you think about different conditions and focus in on the most critical parts of your data structures.
Often I've found that building a visualization brings me to admit that the way the data is being stored just isn't very clear. For a B-tree, the representation isn't very flexible. But for other cases, you may be using parallel arrays when a nested tree of objects would be more natural.

Does YAGNI also apply when writing tests?

When I write code I only write the functions I need as I need them.
Does this approach also apply to writing tests?
Should I write a test in advance for every use-case I can think of just to play it safe or should I only write tests for a use-case as I come upon it?
I think that when you write a method you should test both expected and potential error paths. This doesn't mean that you should expand your design to encompass every potential use -- leave that for when it's needed, but you should make sure that your tests have defined the expected behavior in the face of invalid parameters or other conditions.
YAGNI, as I understand it, means that you shouldn't develop features that are not yet needed. In that sense, you shouldn't write a test that drives you to develop code that's not needed. I suspect, though, that's not what you are asking about.
In this context I'd be more concerned with whether you should write tests that cover unexpected uses -- for example, errors due to passing null or out-of-range parameters -- or repeating tests that only differ with respect to the data, not the functionality. In the former case, as I indicated above, I would say yes. Your tests will document the expected behavior of your method in the face of errors. This is important information to people who use your method.
In the latter case, I'm less able to give you a definitive answer. You certainly want your tests to remain DRY -- don't write a test that simply repeats another test even if it has different data. Alternatively, you may not discover potential design issues unless you exercise the edge cases of your data. A simple example is a method that computes a sum of two integers: what happens if you pass it maxint as both parameters? If you only have one test, then you may miss this behavior. Obviously, this is related to the previous point. Only you can be sure when a test is really needed or not.
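A minimal sketch of that maxint edge case, assuming pytest is available; add_int32 is an illustrative stand-in whose documented contract is to reject results outside the signed 32-bit range (Python's own ints do not overflow):

    import pytest

    INT32_MAX = 2**31 - 1
    INT32_MIN = -2**31

    def add_int32(a, b):
        # Contract of this stand-in: reject results that don't fit in a signed 32-bit int.
        result = a + b
        if not INT32_MIN <= result <= INT32_MAX:
            raise OverflowError("result does not fit in a signed 32-bit integer")
        return result

    def test_sum_of_small_numbers():
        assert add_int32(2, 3) == 5

    def test_sum_of_two_maxints_is_rejected():
        with pytest.raises(OverflowError):
            add_int32(INT32_MAX, INT32_MAX)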
Yes YAGNI absolutely applies to writing tests.
As an example, I, for one, do not write tests to check any Properties. I assume that properties work a certain way, and until I come to one that does something different from the norm, I won't have tests for them.
You should always consider the validity of writing any test. If there is no clear benefit to you in writing the test, then I would advise that you don't. However, this is clearly very subjective, since what you might think is not worth it someone else could think is very worth the effort.
Also, would I write tests to validate input? Absolutely. However, I would only do it to a point. Say you have a function with 3 parameters that are ints and it returns a double. How many tests are you going to write around that function? I would use YAGNI here to determine which tests are going to get you a good ROI, and which are useless.
Write the test as you need it. Tests are code. Writing a bunch of (initially failing) tests up front breaks the red/fix/green cycle of TDD, and makes it harder to identify valid failures vs. unwritten code.
You should write the tests for the use cases you are going to implement during this phase of development.
This gives the following benefits:
Your tests help define the functionality of this phase.
You know when you've completed this phase because all of your tests pass.
You should write tests that cover all your code, ideally. Otherwise, the rest of your tests lose value, and you will in the end debug that piece of code repeatedly.
So, no. YAGNI does not include tests :)
There is of course no point in writing tests for use cases you're not sure will get implemented at all - that much should be obvious to anyone.
For use cases you know will get implemented, test cases are subject to diminishing returns, i.e. trying to cover each and every possible obscure corner case is not a useful goal when you can cover all important and critical paths with half the work - assuming, of course, that the cost of overlooking a rarely occurring error is endurable; I would certainly not settle for anything less than 100% code and branch coverage when writing avionics software.
You'll probably get some variance here, but generally, the goal of writing tests (to me) is to ensure that all your code is functioning as it should, without side effects, in a predictable fashion and without defects. In my mind, then, the approach you discuss of only writing tests for use cases as you come upon them does you no real good, and may in fact cause harm.
What if the particular use case for the unit under test that you ignore causes a serious defect in the final software? Has the time spent developing tests bought you anything in this scenario beyond a false sense of security?
(For the record, this is one of the issues I have with using code coverage to "measure" test quality -- it's a measurement that, if low, may give an indication that you're not testing enough, but if high, should not be used to assume that you are rock-solid. Get the common cases tested, the edge cases tested, then consider all the ifs, ands and buts of the unit and test them, too.)
Mild Update
I should note that I'm coming from possibly a different perspective than many here. I often find that I'm writing library-style code, that is, code which will be reused in multiple projects, for multiple different clients. As a result, it is generally impossible for me to say with any certainty that certain use cases simply won't happen. The best I can do is either document that they're not expected (and hence may require updating the tests afterward), or -- and this is my preference :) -- just write the tests. I often find option #2 is far more livable on a day-to-day basis, simply because I have much more confidence when I'm reusing component X in new application Y. And confidence, in my mind, is what automated testing is all about.
You should certainly hold off writing test cases for functionality you're not going to implement yet. Tests should only be written for existing functionality or functionality you're about to put in.
However, use cases are not the same as functionality. You only need to test the valid use cases that you've identified, but there's going to be a lot of other things that might happen, and you want to make sure those inputs get a reasonable response (which could well be an error message).
Obviously, you aren't going to get all the possible use cases; if you could, there'd be no need to worry about computer security. You should get at least the more plausible ones, and as problems come up you should add them to the use cases to test.
I think the answer here is, as it is in so many places, it depends. If the contract that a function presents states that it does X, and I see that it's got associated unit tests, etc., I'm inclined to think it's a well-tested unit and use it as such, even if I don't use it that exact way elsewhere. If that particular usage pattern is untested, then I might get confusing or hard-to-trace errors. For this reason, I think a test should cover all (or most) of the defined, documented behavior of a unit.
If you choose to test more incrementally, I might add to the doc comments that the function is "only tested for [certain kinds of input], results for other inputs are undefined".
I frequently find myself writing tests, TDD, for cases that I don't expect the normal program flow to invoke. The "fake it 'til you make it" approach has me starting, generally, with a null input - just enough to have an idea in mind of what the function call should look like, what types its parameters will have and what type it will return. To be clear, I won't just send null to the function in my test; I'll initialize a typed variable to hold the null value; that way when Eclipse's Quick Fix creates the function for me, it already has the right type. But it's not uncommon that I won't expect the program normally to send a null to the function. So, arguably, I'm writing a test that I AGN. But if I start with values, sometimes it's too big a chunk. I'm both designing the API and pushing its real implementation from the beginning. So, by starting slow and faking it 'til I make it, sometimes I write tests for cases I don't expect to see in production code.
If you're working in a TDD or XP style, you won't be writing anything "in advance" as you say, you'll be working on a very precise bit of functionality at any given moment, so you'll be writing all the necessary tests in order make sure that bit of functionality works as you intend it to.
Test code is similar to "code" itself: you won't be writing code in advance for every use case your app has, so why would you write test code in advance?

Can invariant testing replace unit testing?

As a programmer, I have bought whole-heartedly into the TDD philosophy and take the effort to make extensive unit tests for any nontrivial code I write. Sometimes this road can be painful (behavioral changes causing cascading multiple unit test changes; high amounts of scaffolding necessary), but on the whole I refuse to program without tests that I can run after every change, and my code is much less buggy as a result.
Recently, I've been playing with Haskell, and its resident testing library, QuickCheck. In a fashion distinctly different from TDD, QuickCheck has an emphasis on testing invariants of the code, that is, certain properties that hold over all (or substantive subsets) of inputs. A quick example: a stable sorting algorithm should give the same answer if we run it twice, should have increasing output, should be a permutation of the input, etc. Then, QuickCheck generates a variety of random data in order to test these invariants.
It seems to me, at least for pure functions (that is, functions without side effects--and if you do mocking correctly you can convert dirty functions into pure ones), that invariant testing could supplant unit testing as a strict superset of those capabilities. Each unit test consists of an input and an output (in imperative programming languages, the "output" is not just the return of the function but also any changed state, but this can be encapsulated). One could conceivably create a random input generator that is good enough to cover all of the unit test inputs that you would have manually created (and then some, because it would generate cases that you wouldn't have thought of); if you find a bug in your program due to some boundary condition, you improve your random input generator so that it generates that case too.
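A minimal sketch of those sorting invariants, written with Python's hypothesis library as a QuickCheck analogue (an assumption; hypothesis must be installed separately):

    from collections import Counter
    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_sort_output_is_nondecreasing(xs):
        ys = sorted(xs)
        assert all(a <= b for a, b in zip(ys, ys[1:]))

    @given(st.lists(st.integers()))
    def test_sort_output_is_a_permutation_of_input(xs):
        assert Counter(sorted(xs)) == Counter(xs)

    @given(st.lists(st.integers()))
    def test_sorting_twice_gives_the_same_answer(xs):
        assert sorted(sorted(xs)) == sorted(xs)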
The challenge, then, is whether or not it's possible to formulate useful invariants for every problem. I'd say it is: it's a lot simpler once you have an answer to see if it's correct than it is to calculate the answer in the first place. Thinking about invariants also helps clarify the specification of a complex algorithm much better than ad hoc test cases, which encourage a kind of case-by-case thinking of the problem. You could use a previous version of your program as a model implementation, or a version of a program in another language. Etc. Eventually, you could cover all of your former test-cases without having to explicitly code an input or an output.
Have I gone insane, or am I on to something?
A year later, I now think I have an answer to this question: No! In particular, unit tests will always be necessary and useful for regression tests, in which a test is attached to a bug report and lives on in the codebase to prevent that bug from ever coming back.
However, I suspect that any unit test can be replaced with a test whose inputs are randomly generated. Even in the case of imperative code, the “input” is the order of imperative statements you need to make. Of course, whether or not it’s worth creating the random data generator, and whether or not you can make the random data generator have the right distribution is another question. Unit testing is simply a degenerate case where the random generator always gives the same result.
What you've brought up is a very good point - when only applied to functional programming. You stated a means of accomplishing this all with imperative code, but you also touched on why it's not done - it's not particularly easy.
I think that's the very reason it won't replace unit testing: it doesn't fit for imperative code as easily.
Doubtful
I've only heard of (not used) these kinds of tests, but I see two potential issues. I would love to have comments about each.
Misleading results
I've heard of tests like:
reverse(reverse(list)) should equal list
unzip(zip(data)) should equal data
It would be great to know that these hold true for a wide range of inputs. But both these tests would pass if the functions just return their input.
It seems to me that you'd want to verify that, e.g., reverse([1 2 3]) equals [3 2 1] to prove correct behavior in at least one case, then add some testing with random data.
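A minimal sketch of the pitfall: a broken reverse that just returns its input still satisfies the round-trip property, while a single concrete example exposes it:

    def broken_reverse(xs):
        return xs  # wrong, but the round-trip property below cannot tell

    xs = [1, 2, 3]
    print(broken_reverse(broken_reverse(xs)) == xs)   # True  - the property is satisfied anyway
    print(broken_reverse(xs) == [3, 2, 1])            # False - the concrete example catches the bug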
Test complexity
An invariant test that fully describes the relationship between the input and output might be more complex than the function itself. If it's complex, it could be buggy, but you don't have tests for your tests.
A good unit test, by contrast, is too simple to screw up or misunderstand as a reader. Only a typo could create a bug in "expect reverse([1 2 3]) to equal [3 2 1]".
What you wrote in your original post reminded me of this problem, which is an open question as to what loop invariant is needed to prove the loop correct...
Anyway, I am not sure how much you have read on formal specification, but you are heading down that line of thought. David Gries's book is one of the classics on the subject; I still haven't mastered the concept well enough to use it rapidly in my day-to-day programming. The usual response to formal spec is that it's hard and complicated, and only worth the effort if you are working on safety-critical systems. But I think there are back-of-the-envelope techniques, similar to what QuickCheck exposes, that can be used.