We have a convention to validate all parameters of constructors and public functions/methods. For mandatory parameters of reference type, we mainly check for non-null and that's the chief validation in constructors, where we set up mandatory dependencies of the type.
The number one reason why we do this is to catch that error early and not get a null reference exception a few hours down the line without knowing where or when the faulty parameter was introduced. As we start transitioning to more and more TDD, some team members feel the validation is redundant.
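To make the convention concrete, here is roughly what such a constructor looks like (a minimal sketch; the class and dependency names are invented for illustration):

using System;

public interface IOrderRepository { }
public interface ILogger { }

public class OrderService
{
    private readonly IOrderRepository _repository;
    private readonly ILogger _logger;

    // Mandatory dependencies are validated once, at construction time,
    // so a missing dependency fails here instead of as a NullReferenceException
    // somewhere down the line.
    public OrderService(IOrderRepository repository, ILogger logger)
    {
        if (repository == null) throw new ArgumentNullException("repository");
        if (logger == null) throw new ArgumentNullException("logger");

        _repository = repository;
        _logger = logger;
    }
}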
Uncle Bob, who is a vocal advocate of TDD, strongly advises against doing parameter validation. His main argument seems to be "I have a suite of unit tests that makes sure everything works".
But I can, for the life of me, just not see in what way unit tests can prevent our developers from calling these methods with bad parameters in production code.
Please, unit testers out there, if you could explain this to me in a rational way with concrete examples, I'd be more than happy to cease this parameter validation!
My answer is "it can't." Basically it sounds like I disagree with Uncle Bob on this (amongst other things).
It's all too easy to imagine a situation where you've unit tested your library code for non-null arguments, and you've unit tested your calling code for a path which happens to provide a null argument to the library without you being aware of it, but which also happens not to cause any problems for that particular path. You can have 100% coverage and actually a pretty good set of tests, and still not notice the problem.
Is everything fine? No, of course it isn't - because you're violating the library contract (don't give me a null value) without being aware of it. Can you be comfortable that the only situations in which you're providing a null argument are ones where it won't matter? I don't think so - especially if you weren't even aware that the argument was null.
In my view, public APIs should validate their arguments regardless of whether the calling code and the API itself is unit tested. Problems in calling code should be exposed, and exposed as early as possible.
That's a question I've been asking myself for ages, and still haven't got a satisfying answer to.
But I believe that when it comes to argument validation, you need to distinguish between two cases:
Are you validating the argument to catch logical programming errors?
if (foo == null) throw new ArgumentNullException("foo");
is quite likely an example of that.
Are you validating the argument because it is some external input (supplied by the user, or read from a configuration file, or from a database), which could be invalid and must be rejected?
if (customerDateOfBirth == new DateTime(1900, 1, 1)) throw …;
might be of this type of argument check.
(If you're exposing an API consumed by someone outside your team, point 2 roughly applies as well.)
I suspect that methodologies such as unit testing, design by contract, and to some extent "fail early" focus mostly on the first type of argument validation. That is, they attempt to detect logical programming errors, not invalid input.
If that is the case, then I dare say it doesn't actually matter which method of error detection you follow; each has its own advantages and disadvantages.† In the extreme case (for instance, when you have absolute trust in your abilities to write bug-free code), you could even drop these checks completely.
However, whatever method you choose for detecting logical errors in your code, you still need to validate user input etc., thus the need to distinguish between the two kinds of argument checks.
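To make the distinction a bit more tangible, here is a rough sketch of how the two kinds of checks might sit side by side in one method (the domain, names and exception type are invented for illustration):

using System;

public class Customer { }

public class CustomerRegistration
{
    public void Register(Customer customer, string emailFromForm)
    {
        // Case 1: guard against a logical programming error in the calling code.
        if (customer == null)
            throw new ArgumentNullException("customer");

        // Case 2: validation of external input, which can legitimately be wrong
        // and must be rejected with a message meaningful to the user.
        if (string.IsNullOrWhiteSpace(emailFromForm) || !emailFromForm.Contains("@"))
            throw new InvalidInputException("Please enter a valid e-mail address.");

        // ... registration logic ...
    }
}

public class InvalidInputException : Exception
{
    public InvalidInputException(string message) : base(message) { }
}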
†) An amateur's incomplete attempt at comparing the relative advantages and disadvantages of Design by Contract, unit testing, and "fail early":
(Though you didn't ask for it... I'll just mention a few key differences.)
Fail early (e.g. explicit argument validation at start of method):
basic checks such as guards against null are easy to write
might mix up guards against logical errors and validation of external input with the same syntax
doesn't allow you to test the interaction of methods
does not encourage you to define (and thus think about) your methods' contracts rigorously
Unit testing:
allows you to test code in isolation, without running the actual application, so detecting bugs can be quicker
if a logical error occurs, you won't have to trace the stack to find the cause, because each unit test stands for a specific "use case" of your code.
allows you to test more than just single methods, e.g. even the interaction between several objects (think stubs & mocks)
writing easy tests (such as guards against null) is more work than with the "fail early" approach (if you strictly adhere to the Arrange-Act-Assert pattern)
Design by Contract:
forces you to explicitly state the contract of your classes (though this is possible with unit tests, too — just in a different way)
allows you to easily state class invariants (internal conditions that must always hold true)
not as well supported by many programming languages / frameworks as the other approaches
It all depends on the type of application you are developing.
I have spent most of my time writing applications that do not expose public APIs. In that case, the application must be deterministic, in the sense that all parameters must and will be non-null. In a nutshell, you should perform input validation at your system boundaries so that invalid inputs never sneak into your application, where they might end up causing null references and the like. In this kind of application, you have full control over checking your application's inputs right where you acquire them.
If you are writing public APIs, then you should definitely check for null references. Just have a look at all the MSDN class methods that can throw exceptions; all of that happens inside the API as precondition checks. You can read the C# Framework Design Guidelines for more information.
In my opinion, whether or not your application exposes an API, having preconditions for your methods is always a good thing (those contracts are documentation for your peers who will work on your code in the future).
I agree with Uncle Bob on almost everything, but not this one. I vote for the "fail fast and fail hard" policy.
This has nothing to do with TDD.
For public APIs, yes, we should do argument checks, as fast as possible.
All constructor argument checks seem completely unnecessary to me, because the code is NOT consumed by anyone outside the team. Why did we have null checks? Because we had no trust in the code that calls these methods.
So what are public APIs? All public methods? If so, there is no such thing as an internal API then, I guess. So why use the word "public" at all? Why not just say that all public methods should do null/boundary checks?
I think the root cause of the problem is a lack of trust in our own code and team members, and apparently we are solving the problem in the wrong way.
Related
I am writing a Clojure library, and I am thinking of using clojure.spec. Is it good practice to use spec/valid? on function inputs at runtime? Or should I use destructuring along with type hints? I am concerned that there will be a performance penalty, and that it might be considered a bad use of spec and generally bad practice.
It is always appropriate to check for valid inputs "where appropriate". What does that mean? That is an open question.
In general, the most "dangerous" inputs will be coming from the outside world, at the boundaries of your program. This means input from the web/browser, or perhaps from another service.
The "safest" inputs are deep inside your program, where (presumably) you have more control and confidence. For example, you would (probably) never check the inputs to the + function to ensure each arg was a number. And, even if an invalid arg slipped by (a string, perhaps), the JVM would throw an exception right away.
For sure, you should have maximal checking when running unit tests, so at least you know your code is working properly in the "normal" path. I like to use Plumatic Schema for this as it is easy to have validation always run during unit tests, but is off by default during "production" runs.
Having said that, I often place assert statements at the beginning of functions where a bad input would be difficult to recognize or would really cause hard-to-diagnose problems later on.
Both of these techniques have saved me many times!
Note that even a strongly typed language like Java won't save you here. Typed variables may ensure that an input is a number, but it could still be invalid, such as a negative value sent to the sqrt function. Other functions may have even more restrictive inputs, such as only being valid for prime numbers, etc. Types cannot capture these detailed requirements. Only good design and good testing can prevent these problems.
Types & asserts cannot prevent all problems, but they are like guard rails on the road (or seatbelts & airbags). They may not be able to prevent a crash, but they can provide early warning and reduce the severity of the consequences.
P.S. You can see how I like to balance the trade-off between safety & cost of checking in a library I wrote. Be sure to see the file _bootstrap.clj, which is a trick to turn on the Plumatic Schema tests only during unit tests.
The question might seem a bit weird, but I'll explain it.
Consider following:
We have a service, FirstNameValidator, which I created for other developers so they have a consistent way to validate a person's first name. I want to test it, but because the full set of possible inputs is infinite (or very, very big), I only test a few cases:
Assert.IsTrue(FirstNameValidator.Validate("John"))
Assert.IsFalse(FirstNameValidator.Validate("$$123"))
I also have LastNameValidator, which is 99% identical, and I wrote a test for it too:
Assert.IsTrue(LastNameValidator.Validate("Doe"))
Assert.IsFalse(LastNameValidator.Validate("__%%"))
But later a new structure appeared - PersonName, which consists of a first name and a last name. We want to validate it too, so I created a PersonNameValidator. Obviously, for reusability I just call FirstNameValidator and LastNameValidator. Everything is fine until I want to write a test for it.
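For reference, the composition looks roughly like this (simplified; it assumes the validators expose a static bool Validate(string), as in the assertions above):

public class PersonName
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public static class PersonNameValidator
{
    // Reuses the existing validators; this is the part I am unsure how to test.
    public static bool Validate(PersonName name)
    {
        return FirstNameValidator.Validate(name.FirstName)
            && LastNameValidator.Validate(name.LastName);
    }
}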
What should i test?
The fact that FirstNameValidator.Validate was actually called with the correct argument?
Or do I need to create a few cases and test them?
That is actually the question - should we test what the service is expected to do? It is expected to validate a PersonName; how it does that we actually don't care. So we pass a few valid and invalid inputs and expect corresponding return values.
Or, maybe, what it actually does? Like, it actually just calls other validators, so test that (a .NET mocking framework allows it).
Unit tests should be acceptance criteria for a properly functioning unit of code...
they should test what the code should and shouldn't do; you will often find corner cases when you are writing tests.
If you refactor code, you often will have to refactor tests... This should be viewed as part of the original effort, and should bring glee to your soul as you have made the product and process an improvement of such magnitude.
of course if this is a library with outside (or internal, depending on company culture) consumers, you have documentation to consider before you are completely done.
edit: also, those tests are pretty weak. You should have a definition of what is legal in each, and actually test inclusion and exclusion of at least all of the classes of glyphs. They can still use related code for testing; i.e. isValidUsername(name, allowsSpace) could work for both first name and whole name, depending on whether spaces are allowed.
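As a sketch of what I mean (NUnit-style TestCase attributes; the character classes and expected results are just assumptions you would pin down for your own rules):

using NUnit.Framework;

[TestFixture]
public class FirstNameValidatorTests
{
    // Cover each class of characters explicitly: letters, hyphens,
    // digits, symbols, whitespace, empty input.
    [TestCase("John", true)]
    [TestCase("Anne-Marie", true)]   // hyphen allowed? decide it and test it
    [TestCase("John3", false)]       // digits
    [TestCase("$$123", false)]       // symbols
    [TestCase(" ", false)]           // whitespace only
    [TestCase("", false)]            // empty
    public void Validate_CoversCharacterClasses(string input, bool expected)
    {
        Assert.AreEqual(expected, FirstNameValidator.Validate(input));
    }
}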
You have formulated your question a bit strangely: Both options that you describe would test that the function behaves as it should - but in each case on a different level of granularity. In one case you would test the behaviour based on the API that is available to a user of the function. Whether and how the function implements its functionality with the help of other functions/components is not relevant. In the second case you test the behaviour in isolation, including the way the function interacts with its dependended-on components.
On a general level it is not possible to say which is better - depending on the circumstances each option may be the best. In general, isolating a piece of software requires usually more effort to implement the tests and makes the tests more fragile against implementation changes. That means, going for isolation should only be done in situations where there are good reasons for it. Before getting to your specific case, I will describe some situations where isolation is recommendable.
With the original depended-on component (DOC), you may not be able to test everything you want. Assume your code does error handling for the case where the DOC returns an error code. But if the DOC cannot easily be made to return an error code, you will have difficulty testing your error handling. In this case, if you double the DOC, you can make the double return an error code and thus also test your error handling.
The DOC may have non-deterministic or system-specific behaviour. Some examples are random number generators or date and time functions. If this makes testing your functions difficult, it would be an argument to replace the DOC with some double, so you have control over what is provided to your functions.
The DOC may require a very complex setup. Imagine a complex database or some complex XML document that needs to be provided. For one thing, this can make your setup quite complicated; for another, your tests get fragile and will likely break if the DOC changes (think of the XML schema changing...).
The setup of the DOC or the calls to the DOC are very time consuming (imagine reading data from a network connection, computing the next chess move, solving the TSP, ...). Or, the use of the DOC prolongs compilation or linking significantly. With a double you can possibly shorten the execution or build time significantly, which is the more interesting the more often you are executing / building the tests.
You may not have a working version of the DOC - possibly the DOC is still under development and is not yet available. Then, with doubles you can start testing nevertheless.
The DOC may be immature, such that with the version you have your tests are instable. In such a case it is likely that you lose trust in your test results and start ignoring failing tests.
The DOC itself may have other dependencies which have some of the problems described above.
These criteria can help to come to an informed decision about whether isolation is necessary. Considering your specific example: The way you have described the situation I get the impression that none of the above criteria is fulfilled. Which for me would lead to the conclusion that I would not isolate the function PersonNameValidator from its DOCs FirstNameValidator and LastNameValidator.
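In other words, I would exercise PersonNameValidator through its public behaviour only, with the real validators running underneath. A rough NUnit-style sketch (assuming PersonName is a simple first/last name pair as described in the question):

using NUnit.Framework;

[TestFixture]
public class PersonNameValidatorTests
{
    // No mocks: the real FirstNameValidator and LastNameValidator run underneath,
    // but the test only cares about the observable result.
    [Test]
    public void Validate_AcceptsWellFormedName()
    {
        var name = new PersonName { FirstName = "John", LastName = "Doe" };
        Assert.IsTrue(PersonNameValidator.Validate(name));
    }

    [Test]
    public void Validate_RejectsNameWithInvalidPart()
    {
        var name = new PersonName { FirstName = "John", LastName = "__%%" };
        Assert.IsFalse(PersonNameValidator.Validate(name));
    }
}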
In Osherove's great book "The Art of Unit Testing" one of the test anti-patterns is over-specification which is basically the same as testing the internal state of the object instead of some expected output. To my experience, using Isolation frameworks can cause the same unwanted side effects as testing internal behavior because one tends to only implement the behavior necessary to make your stub interact with the object under test. Now if your implementation changes later on (but the contract remains the same), your test will suddenly break because you are expecting some data from the stub which was not implemented.
So what do you think is the best approach to counter this?
1) Implement your stubs/mocks fully; this has the negative side-effect of potentially making your test less readable and also specifying more than necessary to make your test pass.
2) Favor manual, fully implemented fakes.
3) Implement your stubs/fakes so that they make your test just pass, and then deal with the brittleness that this might introduce.
I do not think you should favor manual testing - unless you prefer to test instead of code.
Instead you have another option - if you test the functionality and not the implementation, try to avoid testing private methods (which can be refactored), and in general write less-fragile tests, you'll see that using a mocking/isolation framework does not require you to over-specify the system, nor does it cause your tests to become more fragile.
In a nutshell - writing fragile tests can be done with or without fakes/mocks, and vice versa.
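To illustrate the difference between a fragile and a non-fragile test, here is a small sketch with a hand-rolled test double (all names are invented): the first test pins down how the result is produced, the second only pins down the observable behaviour.

using NUnit.Framework;

public interface IPriceSource { decimal GetPrice(string sku); }

public class Basket
{
    private readonly IPriceSource _prices;
    public Basket(IPriceSource prices) { _prices = prices; }
    public decimal Total(string sku, int quantity) { return _prices.GetPrice(sku) * quantity; }
}

// A spy that records calls, used by the over-specified test.
public class SpyPriceSource : IPriceSource
{
    public int Calls;
    public decimal GetPrice(string sku) { Calls++; return 10m; }
}

[TestFixture]
public class BasketTests
{
    [Test] // Fragile: breaks if Total() starts caching and calls GetPrice only once.
    public void OverSpecified_Total_CallsPriceSourceExactlyTwice()
    {
        var spy = new SpyPriceSource();
        var basket = new Basket(spy);
        basket.Total("A", 2);
        basket.Total("A", 3);
        Assert.AreEqual(2, spy.Calls);
    }

    [Test] // Robust: only the contract ("total = price * quantity") is asserted.
    public void BehaviourOnly_Total_MultipliesPriceByQuantity()
    {
        var basket = new Basket(new SpyPriceSource());
        Assert.AreEqual(20m, basket.Total("A", 2));
    }
}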
I tend to use mocks instead of stubbed/fake objects. I find them a lot less trouble and they are way better at keeping test code under control because it's not cluttered with all sorts of half baked implementations. They also help to clarify what is being tested.
Another advantage is that I only have to address where the class under test needs something specific from the mock. So I don't have to code where it's not important. As for verification, again I only have to verify the calls from the class under test to the mock that I care about and consider important aspects of the test.
I think the problem is always the same, although it comes in different flavours: if you have tests that somehow cover the internals of a class, then changing that internal code will break those tests.
IMHO there are two ways to deal with that:
Your tests only cover the public contract of a class - a test strategy which is widely adopted for that exact reason: you don't have to change your tests as long as the public contract remains constant. Unfortunately, this is not what you will have when doing test-driven development.
If your tests come from a TDD process, then they will regularly cover non-public code. This means that they will break if you change the code. The only way to keep things in sync here is to 'fix' the tests together with the code. This means more maintenance during development. There's no recipe to easily deal with that (other than throw away the test, of course...).
My personal 'way out' is think in terms of 'code elements' rather than just code. A code element consists of three parts: Documentation, test, code. So if you change one part of the element, you have to also adjust the other two - otherwise you leave a broken code element behind.
When I write code I only write the functions I need as I need them.
Does this approach also apply to writing tests?
Should I write a test in advance for every use-case I can think of just to play it safe or should I only write tests for a use-case as I come upon it?
I think that when you write a method you should test both expected and potential error paths. This doesn't mean that you should expand your design to encompass every potential use -- leave that for when it's needed, but you should make sure that your tests have defined the expected behavior in the face of invalid parameters or other conditions.
YAGNI, as I understand it, means that you shouldn't develop features that are not yet needed. In that sense, you shouldn't write a test that drives you to develop code that's not needed. I suspect, though, that's not what you are asking about.
In this context I'd be more concerned with whether you should write tests that cover unexpected uses -- for example, errors due to passing null or out-of-range parameters -- or repeating tests that only differ with respect to the data, not the functionality. In the former case, as I indicated above, I would say yes. Your tests will document the expected behavior of your method in the face of errors. This is important information to people who use your method.
In the latter case, I'm less able to give you a definitive answer. You certainly want your tests to remain DRY -- don't write a test that simply repeats another test even if it has different data. On the other hand, you may not discover potential design issues unless you exercise the edge cases of your data. A simple example is a method that computes the sum of two integers: what happens if you pass it maxint as both parameters? If you only have one test, then you may miss this behavior. Obviously, this is related to the previous point. Only you can be sure when a test is really needed or not.
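For instance, a quick sketch of that maxint case (NUnit-style; the Calculator class is hypothetical, and the expected value assumes C#'s default unchecked integer arithmetic):

using NUnit.Framework;

public static class Calculator
{
    // Hypothetical method under test.
    public static int Sum(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorEdgeCaseTests
{
    [Test]
    public void Sum_OfTwoMaxInts_OverflowsSilentlyInUncheckedContext()
    {
        // In C#, int arithmetic is unchecked by default, so this wraps around
        // instead of throwing. Whether that is acceptable is a design decision
        // this test forces you to make explicitly.
        Assert.AreEqual(-2, Calculator.Sum(int.MaxValue, int.MaxValue));
    }
}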
Yes YAGNI absolutely applies to writing tests.
As an example, I, for one, do not write tests to check any Properties. I assume that properties work a certain way, and until I come to one that does something different from the norm, I won't have tests for them.
You should always consider the validity of writing any test. If there is no clear benefit to you in writing the test, then I would advise that you don't. However, this is clearly very subjective, since what you might think is not worth it someone else could think is very worth the effort.
Also, would I write tests to validate input? Absolutely. However, I would do it to a point. Say you have a function with 3 parameters that are ints and it returns a double. How many tests are you going to write around that function? I would use YAGNI here to determine which tests are going to get you a good ROI, and which are useless.
Write the test as you need it. Tests are code. Writing a bunch of (initially failing) tests up front breaks the red/fix/green cycle of TDD, and makes it harder to identify valid failures vs. unwritten code.
You should write the tests for the use cases you are going to implement during this phase of development.
This gives the following benefits:
Your tests help define the functionality of this phase.
You know when you've completed this phase because all of your tests pass.
You should write tests that cover all your code, ideally. Otherwise, the rest of your tests lose value, and you will in the end debug that piece of code repeatedly.
So, no. YAGNI does not include tests :)
There is of course no point in writing tests for use cases you're not sure will get implemented at all - that much should be obvious to anyone.
For use cases you know will get implemented, test cases are subject to diminishing returns, i.e. trying to cover each and every possible obscure corner case is not a useful goal when you can cover all important and critical paths with half the work - assuming, of course, that the cost of overlooking a rarely occurring error is endurable; I would certainly not settle for anything less than 100% code and branch coverage when writing avionics software.
You'll probably get some variance here, but generally, the goal of writing tests (to me) is to ensure that all your code is functioning as it should, without side effects, in a predictable fashion and without defects. In my mind, then, the approach you discuss of only writing tests for use cases as they are come upon does you no real good, and may in fact cause harm.
What if the particular use case for the unit under test that you ignore causes a serious defect in the final software? Has the time spent developing tests bought you anything in this scenario beyond a false sense of security?
(For the record, this is one of the issues I have with using code coverage to "measure" test quality -- it's a measurement that, if low, may give an indication that you're not testing enough, but if high, should not be used to assume that you are rock-solid. Get the common cases tested, the edge cases tested, then consider all the ifs, ands and buts of the unit and test them, too.)
Mild Update
I should note that I'm coming from possibly a different perspective than many here. I often find that I'm writing library-style code, that is, code which will be reused in multiple projects, for multiple different clients. As a result, it is generally impossible for me to say with any certainty that certain use cases simply won't happen. The best I can do is either document that they're not expected (and hence may require updating the tests afterward), or -- and this is my preference :) -- just write the tests. I often find option #2 is far more livable on a day-to-day basis, simply because I have much more confidence when I'm reusing component X in new application Y. And confidence, in my mind, is what automated testing is all about.
You should certainly hold off writing test cases for functionality you're not going to implement yet. Tests should only be written for existing functionality or functionality you're about to put in.
However, use cases are not the same as functionality. You only need to test the valid use cases that you've identified, but there's going to be a lot of other things that might happen, and you want to make sure those inputs get a reasonable response (which could well be an error message).
Obviously, you aren't going to get all the possible use cases; if you could, there'd be no need to worry about computer security. You should get at least the more plausible ones, and as problems come up you should add them to the use cases to test.
I think the answer here is, as it is in so many places, it depends. If the contract that a function presents states that it does X, and I see that it's got associated unit tests, etc., I'm inclined to think it's a well-tested unit and use it as such, even if I don't use it that exact way elsewhere. If that particular usage pattern is untested, then I might get confusing or hard-to-trace errors. For this reason, I think a test should cover all (or most) of the defined, documented behavior of a unit.
If you choose to test more incrementally, I might add to the doc comments that the function is "only tested for [certain kinds of input], results for other inputs are undefined".
I frequently find myself writing tests, TDD, for cases that I don't expect the normal program flow to invoke. The "fake it 'til you make it" approach has me starting, generally, with a null input - just enough to have an idea in mind of what the function call should look like, what types its parameters will have and what type it will return. To be clear, I won't just send null to the function in my test; I'll initialize a typed variable to hold the null value; that way when Eclipse's Quick Fix creates the function for me, it already has the right type. But it's not uncommon that I won't expect the program normally to send a null to the function. So, arguably, I'm writing a test that I AGN. But if I start with values, sometimes it's too big a chunk. I'm both designing the API and pushing its real implementation from the beginning. So, by starting slow and faking it 'til I make it, sometimes I write tests for cases I don't expect to see in production code.
If you're working in a TDD or XP style, you won't be writing anything "in advance" as you say, you'll be working on a very precise bit of functionality at any given moment, so you'll be writing all the necessary tests in order make sure that bit of functionality works as you intend it to.
Test code is similar to "code" itself; you won't be writing code in advance for every use case your app has, so why would you write test code in advance?
You may think this question is like this question asked on StackOverflow earlier. But I am trying to look at things differently.
In TDD, we write tests that include different conditions, criteria, verification code. If a class passes all these tests we are good to go. It is a way of making sure that the class actually does what it's supposed to do and nothing else.
If you follow Bertrand Meyer's book Object Oriented Software Construction word for word, the class itself has internal and external contracts, so that it only does what it's supposed to do and nothing else. No external tests are required because the code to ensure the contract is followed is part of the class.
Quick example to make things clear
TDD
Create a test to ensure that in all cases a value ranges from (0-100)
Create a class containing a method that passes the test.
DBC
Create a class, create a contract for that member variable to range from (0-100), specify what happens on a contract breach, and define a method.
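A rough C# sketch of what the DBC version might look like, using plain runtime checks to stand in for contract syntax (Eiffel's require/ensure, or System.Diagnostics.Contracts in .NET, would express the same idea more directly):

using System;

public class Percentage
{
    private int _value;

    // Invariant: _value always stays within 0-100.
    public int Value
    {
        get { return _value; }
        set
        {
            // Precondition: the "contract" every caller must honour,
            // enforced by the class itself rather than by an external test.
            if (value < 0 || value > 100)
                throw new ArgumentOutOfRangeException("value", "Value must be in the range 0-100.");
            _value = value;
        }
    }
}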
I personally like the DBC approach.
Is there a reason why pure DBC is not so popular? Is it the languages or tools or being Agile or is it just me who likes to have code responsible for itself?
If you think I am not thinking right, I would be more than willing to learn.
The main problem with DBC is that in the vast majority of cases, either the contract cannot be formally specified (at least not conveniently), or it cannot be checked with current static analysis tools. Until we get past this point for mainstream languages (not Eiffel), DBC will not give the kind of assurance that people need.
In TDD, tests are written by a human being based on the current natural-text specifications of the method that are (hopefully) well-documented. Thus, a human interprets correctness by writing the test and gets some assurance based on that interpretation.
If you read Sun's guide for writing JavaDocs, it says that the documentation should essentially lay out a contract sufficient to write a test plan. Hence, design by contract is not necessarily mutually exclusive with TDD.
TDD and DbC are two different strategies. DbC permits failing fast at runtime while TDD acts "at compile time" (to be exact, it adds a new step right after compilation to run the unit tests).
That's a big advantage of TDD over DbC: it allows you to get earlier feedback. When you write code the TDD way, you get the code and its unit tests at the same time, and you can verify it "works" according to what you thought it should, which you encoded in the test. With DbC, you get code with embedded tests, but you still have to exercise it. IMO, this is certainly one reason that DbC is not so popular.
Other advantages: TDD creates an automatic test suite that allows detecting (read: preventing) regressions and makes refactoring safe, so that you can grow your design incrementally. DbC does not offer this possibility.
Now, using DbC to fail fast can be very helpful, especially when your code interfaces with other components or has to rely on external sources, in which case testing the contract can save you hours.
First of all, I am an Eiffel software engineer, so I can speak to the matter from experience.
The premise of TDD vs DbC is incorrect
The two technologies are not at odds with each other, but complementary to each other. The complement has to do with the placement of assertions and purpose.
The purpose of TDD has both components and scope. The basic components of TDD are boolean assertions and object feature (e.g. method) execution. The steps are simple:
Create an object.
Execute some code in a feature.
Make assertions about the state of the data on the object.
Assertions that fail, fail the test. Passing all assertions is the goal.
Like TDD, the contracts of Design-by-Contract have purpose, scope, and components. While TDD is limited to unit-test-time, contracts can live through the entire SDLC (Software Development Life-cycle)! Within the scope of TDD, execution of object methods (features) will execute the contracts. In an Eiffel Studio Auto-test (TDD) setup, one creates an object and makes the call (just like TDD in other languages), but here is where the likeness ends.
In Eiffel Studio with Auto-test and Eiffel code with contracts, the purpose changes somewhat. We want to test the Client-Supplier relationship. Our TDD code is pretending to be a Client of our Supplier method on its object. We create our objects and call the methods based on this purpose, and not just simplistic "TDD-ish method testing". Because the calls to our methods (features) have contracts, those contracts will execute as a part of our TDD-ish code in Auto-test. Because this is true, contract assertions (tests) that we place in our code do NOT have to appear in our TDD test code. Our job (as a programmer) is to simply ensure: A) The code + contracts are executed along all N-paths, and B) The code + contracts are executed using all reasonable data types and ranges.
There is perhaps more to write about the TDD-DbC complement relationship, but I won't be boorish with you on the matter. Suffice it to say that TDD and DbC are NOT at odds with each other - not by a long shot!
The power of the contracts of DbC beyond where TDD can reach
Now, we can turn our attention to the power of the contracts of Design-by-Contract beyond where TDD can reach!
Contracts live in the code. They are not external to it, but internal. The most powerful bit (beyond their client-supplier contract relationship basis) about contracts is that the compiler is designed to know about them! They are NOT a bolt-on addition to Eiffel! Thus, they participate in every aspect of inheritance (both traditional vertical is-a inheritance and in lateral or horizontal Generics). Moreover, they reach to a place that TDD cannot reach—inside the method (feature).
While TDD can mimic pre-conditions and post-conditions with some ease, TDD cannot reach inside the code and perform loop-invariant contracts, nor can it do periodic spot-check "check" contracts along a block of code as it is executing. This is a powerful logical and qualitative paradigm, and a reality about how design-by-contract works.
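(For readers who don't write Eiffel: translated very loosely into C# terms, the kind of in-method spot check meant here is roughly the following, except that in Eiffel such checks are language constructs the compiler knows about and can switch on or off per build. The example is invented.)

using System.Diagnostics;

public static class Accounts
{
    // Transfers an amount between two balances held in an array.
    public static void Transfer(long[] balances, int from, int to, long amount)
    {
        long totalBefore = balances[from] + balances[to];

        balances[from] -= amount;
        // Mid-method spot check ("check" in Eiffel terms): no money has vanished yet.
        Debug.Assert(balances[from] + amount + balances[to] == totalBefore);

        balances[to] += amount;
        // Postcondition-like check: money is conserved across the transfer.
        Debug.Assert(balances[from] + balances[to] == totalBefore);
    }
}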
Moreover, TDD cannot do class invariants except in the faintest of ways. I have tried my hardest to get my Auto-test code (which is really just Eiffel Studio's version of applied TDD) to do class-invariant mimicry. It is not possible. To understand why, you would have to know the ins and outs of how Eiffel class invariants work. So, for the moment, you will simply have to either take my word for it (or not) that TDD is incapable of this task, which DbC handles so easily, well, and elegantly!
The reach of DbC does not end with the above notions
We noted above that TDD lives at unit-test-time. Contracts, because they are applied in code under the supervision and control of the compiler, apply anywhere that the code can be executed:
Workbench: you, as a programmer, are using the code to see it work (e.g. before TDD-time or in conjunction with TDD-time).
Unit-test: your continuous integration testing, unit-testing, TDD, etc.
Alpha-test: your initial test users will trip over contracts as they run the executable.
Beta-test: a wider audience of users will also trip over contracts.
Production: the final executable (or production system) can have continual testing applied through contracts (TDD cannot).
In each of the situations above, one will find that one has control over just which contracts run and from what sources! You can selectively and fine-grainly turn on and off various forms of contracts and control with extreme precision where and when they are applied by the compiler!
And if all of this was not enough, contracts (by design) can do something that no TDD assertion can ever do: tell you where in the call-stack and which client-supplier relationship is broken, and why (which also immediately suggests how to fix it). Why is this true?
TDD assertions are designed to tell you about the results of a code run (execution) after the fact. TDD assertions can only see as far as the current state of the method under examination. What TDD assertions cannot do, from their position outside the code-base, is tell you precisely which call failed and why! You see - your initial TDD call to some method will trigger that method. Many times, that method will call another, and another, and another - sometimes, as the call-stack winds up and down and hither and yon, there is a breakage: something writes data wrong, does not write it at all, or writes it when it ought not.
TDD is like the police showing up to the crime scene after the murder has already happened. All they have left is forensic clues that they hope will lead them to a suspect and a conviction. But what if we could be there as the crime was taking place? That is the difference between the placement of TDD assertions and contract assertions. Contracts are there to catch the crime in progress and they point directly at the offender as it is committing the offense!
Recap
Let's recap.
TDD is not at odds with DbC.
It is a complement and a cooperative set of technologies, but with different functions and purposes, as well as tools to work with them.
Contracts reach further and reveal more about your code when it breaks.
TDD is one form of catalyst for contracts to be executed.
At the end of the day: I want both! After reading all of this (if you survived), I hope you do as well.
Design-by-contract and test-driven development are not mutually exclusive.
Bertrand Meyer's book Object Oriented Software Construction, 2nd Edition doesn't say that you never make mistakes. Indeed, if you look at the chapter "When the contract is broken", it discusses what happens when a function fails to accomplish what its contract states.
The simple fact that you use the DbC technique doesn't make your code correct. Design-by-contract establishes well-defined rules for your code and its users, in the form of contracts. It's helpful, but you can always mess things up anyway, only that you'll probably notice earlier.
Test-driven development will check, from the outside world, black box style, that the public interface of your class behaves correctly.
I think it is best to use both methods in conjunction rather than just one or the other.
It has always seemed to me that fully enforcing a contract within the class and its methods themselves can be impractical.
For example, if a function says it will hash a string by some method and return the hashed string as output, how does the function enforce that the string was hashed correctly? Hash it again and see if they match? Seems silly. Reverse the hash to see if you get the original? Not possible. Rather, you need a set of test cases to ensure that the function behaves correctly.
On the other hand, if your particular implementation requires that your input data be of a certain size, then establishing a contract and enforcing it in your code seems like the best approach.
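A sketch of how the two fit together for the hashing example (the hash function, its length limit, and the tests are all invented for illustration; NUnit-style asserts):

using System;
using NUnit.Framework;

public static class Hasher
{
    // Contract-style precondition: this implementation only accepts short inputs.
    public static string Hash(string input)
    {
        if (input == null) throw new ArgumentNullException("input");
        if (input.Length > 64) throw new ArgumentException("Input must be 64 characters or fewer.", "input");

        // Toy hash for illustration only.
        int h = 17;
        foreach (char c in input) h = h * 31 + c;
        return h.ToString("X8");
    }
}

[TestFixture]
public class HasherTests
{
    // The contract cannot say the output is the *right* hash;
    // concrete input/output expectations in tests do that.
    [Test]
    public void Hash_IsDeterministicAndDistinguishesInputs()
    {
        Assert.AreEqual(Hasher.Hash("abc"), Hasher.Hash("abc"));    // deterministic
        Assert.AreNotEqual(Hasher.Hash("abc"), Hasher.Hash("abd")); // distinguishes inputs
    }

    [Test]
    public void Hash_RejectsOverlongInput()
    {
        Assert.Throws<ArgumentException>(() => Hasher.Hash(new string('x', 65)));
    }
}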
In my mind TDD is more "inductive". You start with examples (test cases) and your code embodies the general solution to those examples.
DBC seems more "deductive", after gathering requirements you determine object behavior and contracts. You then code the specific implementation of those contracts.
Writing contracts is somewhat difficult, more so than tests that are concrete examples of behavior; this may be part of the reason TDD is more popular than DBC.
I see no reason why both cannot co-exist. It is wonderful to look at a method and know at a glance exactly what the contract is. It is also wonderful to know that I can run my unit tests and know that nothing was broken with my last change. The two techniques are not mutually exclusive. Why design by contract is not more popular is a mystery.
I've used both in the past and found the DBC style less "intrusive". The driver for DBC may be a regular application run. For unit tests you have to take care of setup because you expect (validate) some responses. For DBC you don't have to. Rules are written in a data-independent manner, so there is no need for setup and mocking around.
More on my experiences with DBC/Python: http://blog.aplikacja.info/2012/04/classic-testing-vs-design-by-contract/
I see Design By Contract as a specification for success/failure in ALL cases, whereas Test Driven Development targets ONE specific case. If the TDD case succeeds, then it is assumed a function is doing its job, but it doesn't take into account other cases that could cause it to fail.
Design By Contract, on the other hand, doesn't necessarily guarantee the desired answer, only that the answer is "correct." For example, if a function is supposed to return a non-null string, the only thing you can assume in the ENSURE is that it will not be null.
But maybe it doesn't return the string that was expected. There is no way for a contract to determine that; only a test can show that it was performing according to the specification.
So the two are complementary.
Greg