Can invariant testing replace unit testing?

As a programmer, I have bought whole-heartedly into the TDD philosophy and make the effort to write extensive unit tests for any nontrivial code I write. Sometimes this road can be painful (behavioral changes causing cascading changes across many unit tests; lots of scaffolding being necessary), but on the whole I refuse to program without tests that I can run after every change, and my code is much less buggy as a result.
Recently, I've been playing with Haskell and its resident testing library, QuickCheck. In a fashion distinctly different from TDD, QuickCheck has an emphasis on testing invariants of the code, that is, certain properties that hold over all inputs (or substantial subsets of them). A quick example: a stable sorting algorithm should give the same answer if we run it twice, should produce output in non-decreasing order, should be a permutation of the input, and so on. Then, QuickCheck generates a variety of random data in order to test these invariants.
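For concreteness, here is a rough sketch of those sorting invariants as a property test. It uses Python's hypothesis library (a QuickCheck-inspired tool) rather than QuickCheck itself, and the built-in sorted stands in for the sort under test; treat it as an illustration, not the canonical QuickCheck formulation.
from collections import Counter
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_invariants(xs):
    out = sorted(xs)
    assert sorted(out) == out                          # running the sort twice gives the same answer
    assert all(a <= b for a, b in zip(out, out[1:]))   # output is in non-decreasing order
    assert Counter(out) == Counter(xs)                 # output is a permutation of the input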
It seems to me, at least for pure functions (that is, functions without side effects--and if you do mocking correctly you can convert dirty functions into pure ones), that invariant testing could supplant unit testing as a strict superset of those capabilities. Each unit test consists of an input and an output (in imperative programming languages, the "output" is not just the return value of the function but also any changed state, but this can be encapsulated). One could conceivably create a random input generator that is good enough to cover all of the unit test inputs that you would have manually created (and then some, because it would generate cases that you wouldn't have thought of); if you find a bug in your program due to some boundary condition, you improve your random input generator so that it generates that case too.
The challenge, then, is whether or not it's possible to formulate useful invariants for every problem. I'd say it is: it's a lot simpler, once you have an answer, to check whether it's correct than it is to calculate the answer in the first place. Thinking about invariants also clarifies the specification of a complex algorithm much better than ad hoc test cases, which encourage a kind of case-by-case thinking about the problem. You could use a previous version of your program as a model implementation, or a version of the program in another language, and so on. Eventually, you could cover all of your former test cases without having to explicitly code an input or an output.
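As a sketch of the "previous version as a model implementation" idea, again in Python with hypothesis (reference_sort and new_sort are hypothetical names, not code from the question): the old trusted code simply becomes the oracle for the new one.
from hypothesis import given, strategies as st

def reference_sort(xs):   # stand-in for the old, trusted implementation
    return sorted(xs)

def new_sort(xs):         # stand-in for the rewrite under test
    return sorted(xs, key=lambda x: x)

@given(st.lists(st.integers()))
def test_new_sort_matches_reference(xs):
    assert new_sort(xs) == reference_sort(xs)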
Have I gone insane, or am I on to something?

A year later, I now think I have an answer to this question: No! In particular, unit tests will always be necessary and useful for regression tests, in which a test is attached to a bug report and lives on in the codebase to prevent that bug from ever coming back.
However, I suspect that any unit test can be replaced with a test whose inputs are randomly generated. Even in the case of imperative code, the “input” is the sequence of imperative statements you need to make. Of course, whether it's worth creating the random data generator, and whether you can give the random data generator the right distribution, is another question. Unit testing is simply a degenerate case where the random generator always gives the same result.
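To illustrate that degenerate case (an assumption: the question is about QuickCheck, but the same idea exists in Python's hypothesis): the @example decorator pins the exact input from an old bug report so it runs on every build, like a regression unit test, while @given keeps generating random inputs around it.
from hypothesis import example, given, strategies as st

@given(st.lists(st.integers()))
@example([2**31 - 1, -1, 0])   # hypothetical input attached to a past bug report
def test_sorting_twice_changes_nothing(xs):
    once = sorted(xs)
    assert sorted(once) == once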

What you've brought up is a very good point - but only as applied to functional programming. You described a means of accomplishing this with imperative code, but you also touched on why it's not done: it's not particularly easy.
I think that's the very reason it won't replace unit testing: it doesn't fit for imperative code as easily.

Doubtful
I've only heard of (not used) these kinds of tests, but I see two potential issues. I would love to have comments about each.
Misleading results
I've heard of tests like:
reverse(reverse(list)) should equal list
unzip(zip(data)) should equal data
It would be great to know that these hold true for a wide range of inputs. But both of these tests would pass if the functions just returned their input.
It seems to me that you'd want to verify that, eg, reverse([1 2 3]) equals [3 2 1] to prove correct behavior in at least one case, then add some testing with random data.
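A sketch of that combination in Python (using hypothesis for the random part; the property is chosen so that an identity-function "reverse" would still fail):
from hypothesis import given, strategies as st

def test_reverse_concrete_case():
    assert list(reversed([1, 2, 3])) == [3, 2, 1]

@given(st.lists(st.integers(), min_size=2, unique=True))
def test_reverse_moves_first_element_to_the_end(xs):
    assert list(reversed(xs))[-1] == xs[0]             # fails for an identity implementation
    assert list(reversed(list(reversed(xs)))) == xs    # the round-trip property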
Test complexity
An invariant test that fully describes the relationship between the input and output might be more complex than the function itself. If it's complex, it could be buggy, but you don't have tests for your tests.
A good unit test, by contrast, is too simple to screw up or misunderstand as a reader. Only a typo could create a bug in "expect reverse([1 2 3]) to equal [3 2 1]".

What you wrote in your original post reminded me of this problem, where it is an open question what loop invariant would prove the loop correct...
Anyway, I am not sure how much you have read on formal specification, but you are heading down that line of thought. David Gries's book is one of the classics on the subject; I still haven't mastered the concept well enough to use it rapidly in my day-to-day programming. The usual response to formal specification is that it's hard and complicated, and only worth the effort if you are working on safety-critical systems. But I think there are back-of-the-envelope techniques, similar to what QuickCheck exposes, that can be used.

When doing TDD, why should I do "just enough" to get a test passing?

Looking at posts like this and others, it seems that the correct way to do TDD is to write a test for a feature, get just that feature to pass, and then add another test and refactor as necessary until it passes, then repeat.
My question is: why is this approach used? I completely understand the write tests first idea, because it helps your design. But why wouldn't I create all tests for a specific function, and then implement that function all at once until all tests pass?
The approach comes from the Extreme Programming principle of You Aren't Going to Need It. If you write a single test, then the code that makes it pass, and keep repeating that process, you usually find that you write just enough to get things working. You don't invent new features that are not needed. You don't handle corner cases that don't exist.
Try an experiment. Write out the list of tests you think you need. Set it aside. Then go with the one-test-at-a-time approach. See if the lists differ and why. When I do that, I almost always end up with fewer tests. I almost always find that I invented a case I didn't need when I do it the all-tests-first way.
For me, it is about "thought burden." If I have all of the possible behaviors to worry about at once, my brain is strained. If I approach them one at a time, I can give full attention to solving the immediate problem.
I believe this derives from the principle of "YAGNI" ("You Ain't Gonna Need It")(*), which states that classes should be as simple as necessary, with no extra features. Hence when you need a feature, you write a test for it, then you write the feature, then you stop. If you wrote a number of tests first, clearly you would be merely speculating on what your API would need to be at some point in the future.
(*) I generally translate that as "You are too stupid to know what will be needed in the future", but that's another topic......
IMHO it reduces the chance of over-engineering the piece of code you are writing.
It's just easier to add unnecessary code when you are looking at different usage scenarios.
Dan North has suggested that there is no such thing as test-driven design because the design is not really driven out by testing -- that these unit tests only become tests once functionality is implemented, but during the design phase you are really designing by example.
This makes sense -- your tests are setting up a range of sample data and conditions with which the system under test is going to operate, and you drive out design based on these example scenarios.
Some of the other answers suggest that this is based on YAGNI. This is partly true.
Beyond that, though, there is the issue of complexity. As is often stated, programming is about managing complexity -- breaking things down into comprehensible units.
If you write 10 tests to cover cases where param1 is null, param2 is null, string1 is empty, int1 is negative, and the current day of the week is a weekend, and then go to implement that, you are having to juggle a lot of complexity at once. This opens up space to introduce bugs, and it becomes very difficult to sort out why tests are failing.
On the other hand, if you write the first test to cover an empty string1, you barely have to think about the implementation. Once the test is passing, you move on to a case where the current day is a weekend. You look at the existing code and it becomes obvious where the logic should go. You run tests and if the first test is now failing, you know that you broke it while implementing the day-of-the-week thing. I'd even recommend that you commit source between tests so that if you break something you can always revert to a passing state and try again.
Doing just a little at a time and then verifying that it works dramatically reduces the space for the introduction of defects, and when your tests fail after implementation you have changed so little code that it is very easy to identify the defect and correct it, because you know that the existing code was already working properly.
This is a great question. You need to find a balance between writing every test in the universe of possible tests and covering the most likely user scenarios. One test is, IMHO, not enough, and I typically like to write 3 or 4 tests which represent the most common uses of the feature. I also like to write a best-case test and a worst-case test as well.
Writing many tests helps you to anticipate and understand the potential use of your feature.
I believe TDD advocates writing one test at a time because it forces you to think in terms of the principle of doing the simplest thing that could possibly work at each step of development.
I think the article you sent is exactly the answer. If you write all the tests first and all of the scenarios first, you will probably write your code to handle all of those scenarios at once, and most of the time you will end up with code that is fairly complex in order to handle all of them.
On the other hand, if you go one at a time, you will end up refactoring your existing code each time to end up with code probably as simple as it can be for all the scenarios.
As in the case of the link you gave in your question, had they written all the tests first, I am pretty sure they would not have ended up with a simple if/else statement, but with a fairly complex recursive piece of code.
The reason behind the principle is simple. How practical it is to stick to is a separate question.
The reason is that if you write more code than what is needed to pass the current test, you are writing code that is, by definition, untested. (It's nothing to do with YAGNI.)
If you write the next test to "catch up" with the production code then you've just written a test that you haven't seen fail. The test may be called "TestNextFeature" but it may as well return true for all the evidence you have on it.
TDD is all about making sure that all code - production and tests - is tested and that all those pesky "but I'm sure I wrote it right" bugs don't get into the code.
I would do as you suggest. Write several tests for a specific function, implement the function, and ensure that all of the tests for this function pass. This ensures that you understand the purpose and usage of the function separately from your implementation of it.
If you need to do a lot more implementation wise than what is tested by your unit tests, then your unit tests are likely not comprehensive enough.
I think part of that idea is to keep simplicity, keep to designed/planned features, and make sure that your tests are sufficient.
Lots of good answers above - YAGNI is the first answer that jumps to mind.
The other important thing about the 'just get the test passing' guideline though, is that TDD is actually a three stage process:
Red > Green > Refactor
Frequently revisiting the final part, the refactoring, is where a lot of the value of TDD is delivered in terms of cleaner code, better API design, and more confidence in the software. You need to refactor in really small short blocks though lest the task become too big.
It is hard to get into this habit, but stick with it, as it's an oddly satisfying way to work once you get into the cycle.

Unit Testing Machine Learning Code

I am writing a fairly complicated machine learning program for my thesis in computer vision. It's working fairly well, but I need to keep trying new things out and adding new functionality. This is problematic because I sometimes introduce bugs when I am extending the code or trying to simplify an algorithm.
Clearly the correct thing to do is to add unit tests, but it is not clear how to do this. Many components of my program produce a somewhat subjective answer, and I cannot automate sanity checks.
For example, I had some code that approximated a curve with a lower-resolution curve, so that I could do computationally intensive work on the lower-resolution curve. I accidentally introduced a bug into this code, and only found it through a painstaking search when the results of my entire program got slightly worse.
But, when I tried to write a unit-test for it, it was unclear what I should do. If I make a simple curve that has a clearly correct lower-resolution version, then I'm not really testing out everything that could go wrong. If I make a simple curve and then perturb the points slightly, my code starts producing different answers, even though this particular piece of code really seems to work fine now.
You may not appreciate the irony, but basically what you have there is legacy code: a chunk of software without any unit tests. Naturally you don't know where to begin. So you may find it helpful to read up on handling legacy code.
The definitive thought on this is Michael Feathers' book, Working Effectively with Legacy Code. There used to be a helpful summary of it on the ObjectMentor site, but alas the website has gone the way of the company. However, WELC has left a legacy in reviews and other articles. Check them out (or just buy the book), although the key lessons are the ones which S.Lott and tvanfosson cover in their replies.
2019 update: I have fixed the link to the WELC summary with a version from the Wayback Machine web archive (thanks #milia).
Also - and despite knowing that answers which comprise mainly links to other sites are low quality answers :) - here is a link to a new (2019 new) Google tutorial on Testing and Debugging ML code. I hope this will be of illumination to future Seekers who stumble across this answer.
"then I'm not really testing out everything that could go wrong."
Correct.
The job of unit tests is not to test everything that could go wrong.
The job of unit tests is to test that what you have does the right thing, given specific inputs and specific expected results. The important part here is that specific, visible, external requirements are satisfied by specific test cases. Not that every possible thing that could go wrong is somehow prevented.
Nothing can test everything that could go wrong. You can write a proof, but you'll be hard-pressed to write tests for everything.
Choose your test cases wisely.
Further, the job of unit tests is to test that each small part of the overall application does the right thing -- in isolation.
Your "code that approximated a curve with a lower-resolution curve" for example, probably has several small parts that can be tested as separate units. In isolation. The integrated whole could also be tested to be sure that it works.
Your "computationally intensive work on the lower-resolution curve" for example, probably has several small parts that can be tested as separate units. In isolation.
The point of unit testing is to create small, correct units that are later assembled.
Without seeing your code, it's hard to tell, but I suspect that you are attempting to write tests at too high a level. You might want to think about breaking your methods down into smaller components that are deterministic and testing these. Then test the methods that use these methods by providing mock implementations that return predictable values from the underlying methods (which are probably located on a different object). Then you can write tests that cover the domain of the various methods, ensuring that you have coverage of the full range of possible outcomes. For the small methods you do so by providing values that represent the domain of inputs. For the methods that depend on these, by providing mock implementations that return the range of outcomes from the dependencies.
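A minimal sketch of that shape, assuming hypothetical names (downsample and analyze_curve are illustrations, not code from the question): the deterministic pieces get direct tests, and the higher-level function takes its dependency as a parameter so a mock with a predictable return value can stand in for it.
from unittest.mock import Mock

def analyze_curve(points, downsample):
    low_res = downsample(points)      # expensive/approximate step, injected as a dependency
    return len(low_res)               # stand-in for the real, deterministic analysis

def test_analyze_curve_uses_downsampled_points():
    fake_downsample = Mock(return_value=[(0, 0), (1, 1)])
    result = analyze_curve([(0, 0), (0.5, 0.4), (1, 1)], fake_downsample)
    fake_downsample.assert_called_once_with([(0, 0), (0.5, 0.4), (1, 1)])
    assert result == 2                # assertion on the predictable, mocked outcome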
Your unit tests need to employ some kind of fuzz factor, either by accepting approximations, or using some kind of probabilistic checks.
For example, if you have some function that returns a floating point result, it is almost impossible to write an exact-equality test that works correctly across all platforms. Your checks would need to perform the approximation.
TEST_ALMOST_EQ(result, 4.0);
Above TEST_ALMOST_EQ might verify that result is between 3.9 and 4.1 (for example).
Alternatively, if your machine learning algorithms are probabilistic, your tests will need to accommodate for it by taking the average of multiple runs and expecting it to be within some range.
x = 0;
for (100 times) {
    x += result_probabilistic_test();
}
avg = x / 100;
TEST_RANGE(avg, 10.0, 15.0);
Of course, the tests are non-deterministic, so you will need to tune them such that you get non-flaky tests with high probability (e.g., increase the number of trials, or widen the acceptable range of error).
You can also use mocks for this (e.g., a mock random number generator for your probabilistic algorithms), and they usually help for deterministically testing specific code paths, but they are a lot of effort to maintain. Ideally, you would use a combination of fuzzy testing and mocks.
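A sketch of both suggestions in Python (train_and_score is a hypothetical probabilistic routine): pytest.approx supplies the fuzz factor for float comparisons, and passing in a seeded random.Random plays the role of the mocked random number generator.
import random
import pytest

def train_and_score(rng):                     # hypothetical probabilistic routine under test
    return 4.0 + rng.gauss(0, 0.01)

def test_single_run_within_tolerance():
    rng = random.Random(42)                   # seeded RNG instead of the global one
    assert train_and_score(rng) == pytest.approx(4.0, abs=0.1)

def test_average_of_many_runs_in_range():
    rng = random.Random(0)
    avg = sum(train_and_score(rng) for _ in range(100)) / 100
    assert 3.9 <= avg <= 4.1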
HTH.
Generally, for statistical measures you would build in an epsilon for your answer, i.e. the mean squared difference of your points would be < 0.01 or some such. Another option is to run several times, and if it fails "too often" then you have an issue.
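For instance, a sketch of that mean-squared-difference check in Python (the expected values and the 0.01 threshold are placeholders, not from the answer):
def mean_squared_error(actual, expected):
    return sum((a - e) ** 2 for a, e in zip(actual, expected)) / len(expected)

def test_curve_points_close_to_expected():
    expected = [1.0, 2.0, 3.0]
    actual = [1.01, 1.98, 3.02]   # stand-in for the values your code actually produces
    assert mean_squared_error(actual, expected) < 0.01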
Get an appropriate test dataset (maybe a subset of what you're usually using)
Calculate some metric on this dataset (e.g. the accuracy)
Note down the value obtained (cross-validated)
This should give an indication of what to set the threshold for
Of course it can be that when you make changes to your code the performance on the dataset will increase a little, but if it ever decreases by a large amount, that is an indication something is going wrong.
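Codified as a test, that workflow might look like the following sketch (evaluate_accuracy, the baseline, and the tolerance are all assumptions for illustration):
def evaluate_accuracy(dataset_path):
    # hypothetical: run the model on the fixed test subset and return its accuracy
    return 0.93

def test_accuracy_has_not_regressed():
    baseline = 0.92      # cross-validated value noted down from a known-good run
    tolerance = 0.02     # small dips are tolerated; a large drop should fail the build
    assert evaluate_accuracy("data/fixed_test_subset.csv") >= baseline - tolerance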

Does YAGNI also apply when writing tests?

When I write code I only write the functions I need as I need them.
Does this approach also apply to writing tests?
Should I write a test in advance for every use-case I can think of just to play it safe or should I only write tests for a use-case as I come upon it?
I think that when you write a method you should test both expected and potential error paths. This doesn't mean that you should expand your design to encompass every potential use -- leave that for when it's needed, but you should make sure that your tests have defined the expected behavior in the face of invalid parameters or other conditions.
YAGNI, as I understand it, means that you shouldn't develop features that are not yet needed. In that sense, you shouldn't write a test that drives you to develop code that's not needed. I suspect, though, that's not what you are asking about.
In this context I'd be more concerned with whether you should write tests that cover unexpected uses -- for example, errors due to passing null or out-of-range parameters -- or whether you should write repeated tests that differ only with respect to the data, not the functionality. In the former case, as I indicated above, I would say yes. Your tests will document the expected behavior of your method in the face of errors. This is important information to people who use your method.
In the latter case, I'm less able to give you a definitive answer. You certainly want your tests to remain DRY -- don't write a test that simply repeats another test even if it has different data. On the other hand, you may not discover potential design issues unless you exercise the edge cases of your data. A simple example is a method that computes the sum of two integers: what happens if you pass it maxint as both parameters? If you only have one test, then you may miss this behavior. Obviously, this is related to the previous point. Only you can be sure whether a test is really needed or not.
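As a sketch of the "errors due to passing null or out-of-range parameters" case (parse_age is a hypothetical function used only for illustration), the tests pin down the behaviour for bad input even though no caller passes bad input today:
import pytest

def parse_age(value):
    if value is None:
        raise ValueError("age is required")
    age = int(value)
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

def test_rejects_none():
    with pytest.raises(ValueError):
        parse_age(None)

def test_rejects_out_of_range_value():
    with pytest.raises(ValueError):
        parse_age("200")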
Yes YAGNI absolutely applies to writing tests.
As an example, I, for one, do not write tests to check any Properties. I assume that properties work a certain way, and until I come to one that does something different from the norm, I won't have tests for them.
You should always consider the validity of writing any test. If there is no clear benefit to you in writing the test, then I would advise that you don't. However, this is clearly very subjective, since what you might think is not worth it someone else could think is very worth the effort.
Also, would I write tests to validate input? Absolutely. However, I would do it only to a point. Say you have a function with 3 parameters that are ints and it returns a double. How many tests are you going to write around that function? I would use YAGNI here to determine which tests are going to get you a good ROI, and which are useless.
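One way to get that ROI without exploding the test count is a small parametrized table of representative cases; a sketch (weighted_average is a made-up example function, not from the answer):
import pytest

def weighted_average(a, b, c):
    return (a + 2 * b + 3 * c) / 6.0

@pytest.mark.parametrize("a,b,c,expected", [
    (0, 0, 0, 0.0),      # all zeros
    (6, 6, 6, 6.0),      # equal inputs pass straight through
    (-6, 0, 6, 2.0),     # mixed signs
])
def test_weighted_average(a, b, c, expected):
    assert weighted_average(a, b, c) == pytest.approx(expected)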
Write the test as you need it. Tests are code. Writing a bunch of (initially failing) tests up front breaks the red/fix/green cycle of TDD, and makes it harder to identify valid failures vs. unwritten code.
You should write the tests for the use cases you are going to implement during this phase of development.
This gives the following benefits:
Your tests help define the functionality of this phase.
You know when you've completed this phase because all of your tests pass.
You should write tests that cover all your code, ideally. Otherwise, the rest of your tests lose value, and you will in the end debug that piece of code repeatedly.
So, no. YAGNI does not include tests :)
There is of course no point in writing tests for use cases you're not sure will get implemented at all - that much should be obvious to anyone.
For use cases you know will get implemented, test cases are subject to diminishing returns, i.e. trying to cover each and every possible obscure corner case is not a useful goal when you can cover all important and critical paths with half the work - assuming, of course, that the cost of overlooking a rarely occurring error is endurable; I would certainly not settle for anything less than 100% code and branch coverage when writing avionics software.
You'll probably get some variance here, but generally, the goal of writing tests (to me) is to ensure that all your code is functioning as it should, without side effects, in a predictable fashion and without defects. In my mind, then, the approach you discuss of only writing tests for use cases as they are come upon does you no real good, and may in fact cause harm.
What if the particular use case for the unit under test that you ignore causes a serious defect in the final software? Has the time spent developing tests bought you anything in this scenario beyond a false sense of security?
(For the record, this is one of the issues I have with using code coverage to "measure" test quality -- it's a measurement that, if low, may give an indication that you're not testing enough, but if high, should not be used to assume that you are rock-solid. Get the common cases tested, the edge cases tested, then consider all the ifs, ands and buts of the unit and test them, too.)
Mild Update
I should note that I'm coming from possibly a different perspective than many here. I often find that I'm writing library-style code, that is, code which will be reused in multiple projects, for multiple different clients. As a result, it is generally impossible for me to say with any certainty that certain use cases simply won't happen. The best I can do is either document that they're not expected (and hence may require updating the tests afterward), or -- and this is my preference :) -- just write the tests. I often find option #2 is far more livable on a day-to-day basis, simply because I have much more confidence when I'm reusing component X in new application Y. And confidence, in my mind, is what automated testing is all about.
You should certainly hold off writing test cases for functionality you're not going to implement yet. Tests should only be written for existing functionality or functionality you're about to put in.
However, use cases are not the same as functionality. You only need to test the valid use cases that you've identified, but there's going to be a lot of other things that might happen, and you want to make sure those inputs get a reasonable response (which could well be an error message).
Obviously, you aren't going to get all the possible use cases; if you could, there'd be no need to worry about computer security. You should get at least the more plausible ones, and as problems come up you should add them to the use cases to test.
I think the answer here is, as it is in so many places, it depends. If the contract that a function presents states that it does X, and I see that it's got associated unit tests, etc., I'm inclined to think it's a well-tested unit and use it as such, even if I don't use it that exact way elsewhere. If that particular usage pattern is untested, then I might get confusing or hard-to-trace errors. For this reason, I think a test should cover all (or most) of the defined, documented behavior of a unit.
If you choose to test more incrementally, I might add to the doc comments that the function is "only tested for [certain kinds of input], results for other inputs are undefined".
I frequently find myself writing tests, TDD, for cases that I don't expect the normal program flow to invoke. The "fake it 'til you make it" approach has me starting, generally, with a null input - just enough to have an idea in mind of what the function call should look like, what types its parameters will have and what type it will return. To be clear, I won't just send null to the function in my test; I'll initialize a typed variable to hold the null value; that way when Eclipse's Quick Fix creates the function for me, it already has the right type. But it's not uncommon that I won't expect the program normally to send a null to the function. So, arguably, I'm writing a test that I AGN. But if I start with values, sometimes it's too big a chunk. I'm both designing the API and pushing its real implementation from the beginning. So, by starting slow and faking it 'til I make it, sometimes I write tests for cases I don't expect to see in production code.
If you're working in a TDD or XP style, you won't be writing anything "in advance" as you say, you'll be working on a very precise bit of functionality at any given moment, so you'll be writing all the necessary tests in order make sure that bit of functionality works as you intend it to.
Test code is similar to "code" itself: you won't be writing code in advance for every use case your app has, so why would you write test code in advance?

Testing When Correctness is Poorly Defined?

I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions.
However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic.
Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?
I think you just need to write unit tests based on small sets of data that will make sure that your code is doing exactly what you want it to do. Whether this gives you a reasonable data-mining algorithm is a separate issue, and I don't think it is possible to settle that with unit tests. There are two "levels" of correctness of your code:
Your code correctly implements the given data mining algorithm (this is the thing you should unit-test)
The data mining algorithm you implement is "correct" - it solves the business problem. This is quite an open question; it probably depends both on some parameters of your algorithm and on the actual data (different algorithms work for different types of data).
When facing cases like this I tend to build one or more stub data sets that reflect the proper underlying complexities of the real-life data. I often do this together with the customer, to make sure I capture the essence of the complexities.
Then I can just codify these into one or more datasets that can be used as basis for making very specific unit tests (sometimes they're more like integration tests with stub data, but I don't think that's an important distinction). So while your algorithm may have "fuzzy" results for a "generic" dataset, these algorithms almost always have a single correct answer for a specific dataset.
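A sketch of codifying such a stub dataset (find_frequent_pairs and the grocery data are hypothetical stand-ins for the real mining routine and the customer's data):
from collections import Counter
from itertools import combinations

def find_frequent_pairs(transactions, min_count=2):
    counts = Counter(pair
                     for t in transactions
                     for pair in combinations(sorted(set(t)), 2))
    return {pair for pair, n in counts.items() if n >= min_count}

def test_known_pattern_is_found_in_stub_dataset():
    # tiny dataset built with the customer, with one known-correct answer
    stub = [["bread", "butter"], ["bread", "butter", "jam"], ["jam"]]
    assert find_frequent_pairs(stub) == {("bread", "butter")}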
Well, there are a few answers.
First of all, as you mentioned, take a small case study, and do the math by hand. Since you wrote the algorithm, you know what it's supposed to do, so you can do it in a limited case.
The other one is to break down every component of your program into testable parts.
If A calls B, B calls C, and C calls D, and you know that A, B, C, and D each give the right answer, then once you test A->B, B->C, and C->D, you can be reasonably sure that A->D is giving the correct response.
Also, if there are other programs out there that do what you are looking to do, try to acquire their datasets, or find an open-source project whose test data you could run against, and see if your application is giving similar results.
Another way to test data mining code is to take a test set, introduce a pattern of the type you're looking for, and then test again to see if the code separates the new pattern out from the old ones.
And, the tried and true, walk through your own code by hand and see if the code is doing what you meant it to do.
Really, the challenge here is this: because your application is meant to do a fuzzy, non-deterministic kind of task in a smart way, the very goal you hope to achieve is that the application becomes better than human beings at finding these patterns. That's great, powerful, and cool ... but if you pull it off, then it becomes very hard for any human beings to say, "In this case, the answer should be X."
In fact, ideally the computer would say, "Not really. I see why you think that, but consider these 4.2 terabytes of information over here. Have you read them yet? Based on those, I would argue that the answer should be Z."
And if you really succeeded in your original goal, the end user might sometimes say, "Zowie, you're right. That is a better answer. You found a pattern that is going to make us money! (or save us money, or whatever)."
If such a thing could never happen, then why are you asking the computer to detect these kinds of patterns in the first place?
So, the best thing I can think of is to let real life help you build up a list of test scenarios. If there ever was a pattern discovered in the past that did turn out to be valuable, then make a "unit test" that sees if your system discovers it when given similar data. I say "unit test" in quotes because it may be more like an integration test, but you may still choose to use NUnit or VS.Net or RSpec or whatever unit test tools you're using.
For some of these tests, you might somehow try to "mock" the 4.2 terabytes of data (you won't really mock the data, but at some higher level you'd mock some of the conclusions reached from that data). For others, maybe you have a "test database" with some data in it, from which you expect a set of patterns to be detected.
Also, if you can do it, it would be great if the system could "describe its reasoning" behind the patterns it detects. This would let the business user deliberate over the question of whether the application was right or not.
This is tricky. This sounds similar to writing tests around our text search engine. If you keep struggling, you'll figure something out:
Start with a small, simplified but reasonably representative data sample, and test basic behavior against it
Rather than asserting that the output is exactly some answer, sometimes it's better to figure out what is important about it. For example, for our search engine, I didn't care so much about the exact order in which the documents were listed, as long as the three key ones were on the first page of results (there's a sketch of this idea below the list).
As you make a small, incremental change, figure out what the essence of it is and write a test for that. Even though the overall calculations take many inputs, individual changes to the codebase should be isolatable. For example, we found certain documents weren't being surfaced because of the presence of hyphens in some of the key words. We created tests verifying that this was behaving how we expected.
Look at tools like FitNesse, which allow you to throw a large number of datasets at a piece of code and assert things about the results. This may be easier to understand than more traditional unit tests.
I've gone back to the product owner, saying "I can't understand how this will work. How will we know if it's right?" Maybe s/he can articulate the essence of the vaguely defined problem. This has worked really well for me many times, and I've talked people out of features because they couldn't be explained.
Be creative!
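Here is the sketch mentioned above: the assertion captures only what matters (the three key documents appear in the first page) and ignores the exact ranking. The search function and document ids are placeholders, not real code from our engine.
def search(query):
    # placeholder: the real engine would return ranked document ids for the query
    return ["doc-7", "doc-3", "doc-9", "doc-1", "doc-4"]

def test_key_documents_appear_on_first_page():
    first_page = search("hyphenated-term")[:10]
    assert {"doc-3", "doc-7", "doc-9"} <= set(first_page)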
Ultimately, you have to decide what your program should be doing, and then test for that.

Is unit testing appropriate for short programs?

I'm not a newbie since I've been programming on and off since 1983, but I only have real experience with scripting languages like Applescript, ARexx, HyperTalk and Bash.
I write scripts to automate data entry, batch process images and convert file formats. I dabble at Processing, Ruby and Python.
Most of the programs I write are under 200 lines with at most 10 functions. I wish to write larger, more capable programs in the future. I want to improve my practices to avoid creating fragile, unmaintainable messes. The programming environments that I work in (Script Editor.app and Text Wrangler.app) have no support for automated testing.
At the scale that I'm working now and writing procedural (not OO) code, is it appropriate to write unit tests, which I understand are:
short programs to test individual functions before combining them into a fully functioning larger program.
Are unit tests worthwhile compared to their cost when making programs at this scale?
Yes. Anything longer than zero lines can be unit tested - usually to good effect.
I'd look at the likelihood for regressions, not the number of lines of code. If your programs will live a long time and are likely to be refactored or otherwise modified, then unit test may make sense. If the code is throwaway or never likely to be modified, then unit tests probably won't be worthwhile.
Today it's a small project, tomorrow it's the centerpiece to your corporate infrastructure. Start it out right, get 100% code coverage right away.
Unit tests serve a couple of purposes; the most obvious is to test the code to determine that it's doing what it's supposed to do. But one of the other, more useful purposes is implicit documentation: if you explicitly unit test a piece of code for a specific behavior, it becomes clear that that behavior is anticipated, even if nonobvious.
Consider the humble addition operator. Straightforward, right? Well, what's the expected behavior when adding two signed integers, both of which are greater than MAXINT / 2? Is it MAXINT, or is it a negative number?
Documenting all this stuff can be unwieldy, at times; not to say that it shouldn't be done, but experience shows that it frequently doesn't get done. However, an explicit unit test that tests for the above case being negative removes all doubt; as well as serving a valid purpose for regression, that the behavior hasn't changed over time.
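As a sketch of that documentation-by-test idea (Python's own integers never overflow, so a hypothetical add_int32 helper simulates C-style 32-bit signed addition here):
def add_int32(a, b):
    s = (a + b) & 0xFFFFFFFF              # keep only the low 32 bits
    return s - 2**32 if s >= 2**31 else s # reinterpret as a signed 32-bit value

def test_adding_two_large_ints_wraps_negative():
    maxint = 2**31 - 1
    # documents the anticipated (and perhaps surprising) behaviour explicitly
    assert add_int32(maxint, maxint) == -2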
Totally - you'll know you are not breaking any existing features while adding new ones in a couple of months...
That's just one of the many benefits of having a set of tests for a piece of code.
Definitely beneficial, as writing the tests may help verify the design of your functions (does the API make sense?) and helps protect you from mistakes in the future. Unit tests can also act as a contract for your functions, indicating how the functions should be used and what they expect.
With shorter programs that are clear simply from reading the code, it may not be worthwhile to test thoroughly if you do not have a lot of time. Otherwise I have to agree with my colleagues here that unit testing has a variety of benefits and is helpful even for small projects.
Warning: The following links may not be applicable but the ensuing discussion is a good read.
There has been some discussion recently within the blogosphere as to when testing is appropriate, specifically Test Driven Development (TDD). You might want to check out some of the articles such as these three articles by Roy Osherove.
If the code logically is divisible into smaller units, then unit tests are appropriate. If the code cannot be sensibly divided into smaller components, then it's a single unit, and I would argue that in that case automated unit testing and automated functional testing would be indistinguishable.
The only time you need to write unit tests is when you care that the output of your program is correct, and will continue to be correct in the future.
If correctness of your code is not important, then it is not necessary to unit test.
I must disagree with most answers. Donald Knuth once said in an interview that people tend to test the most when they are unsure of what they are doing or are working in a not-so-comfortable domain.
With that in mind, I say that besides documenting, like mcwafflestixlivejournalcom said, unit tests only help you if you are not 100% sure of your code. Taking the example of the addition operator for two ints, I don't care about overflow if I'm adding two people's ages.
OTOH, unit tests make you factor out the core logic of your programs (which makes you do things in a more modular way) and help you test faster than you ever could by testing 'manually'. Not all of us are Donald Knuth, after all.
Bear in mind that my answer is for small pieces of functionality, like the original poster asked about.