How much unit testing before starting to code a method/class?

I'm starting (or at least trying) to code using TDD principles, and I have this question: how many tests do I need to write before I actually start coding?
Take, for example, a hypothetical Math class and a method Divide(int a, int b).
a) Do I have to fully test all methods of the Math class (Sum, Average, ...) before starting to code Math?
b) Do I have to fully test the Divide method, asserting for example on division by zero, before starting to code the method?
c) Or can I create a simple test assertion, verify that it fails, write the code and check that it passes, repeating the process for each of the assertions of a method?
I think option c) is the correct one, but I couldn't find an answer to it (I did some searching but couldn't find anything definitive).

Your option c) represents fully by-the-book TDD.
You write one failing test exercising a feature of the class that you are working on and then write only enough code to make that test pass. Then you do this again, for the next test.
By doing it this way, you should see each new piece of code you write being tightly focused on a particular use case/test, and you should also find that your tests remain distinct in what they cover.
You want to end up working in a red-green-refactor fashion, so that periodically you go back over both your code and your tests for places where you can refactor things into a better design.
Of course, in the real world you may end up writing many red tests, or writing more code than a particular test requires, or even writing code without tests, but that is moving away from TDD and should only be done with caution.
The Wikipedia article on this is actually quite good: http://en.wikipedia.org/wiki/Test-driven_development
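As a rough sketch (assuming JUnit 4 and the Math/divide names from the question), a single iteration of that cycle might look like this:
// Step 1 (RED): write one failing test for the first behaviour you need.
@Test
public void divideReturnsTheQuotient() {
    Assert.assertEquals(10, new Math().divide(20, 2));
}
// Step 2 (GREEN): write just enough production code to make that one test pass.
public int divide(int a, int b) {
    return a / b;
}
// Step 3 (REFACTOR), then repeat the cycle with the next failing test,
// for example one that pins down the behaviour for division by zero.
Only after the current test is green do you move on to the next one.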

The first thing you want to do is write a specification for each method you want to implement. In your specification, address as many corner cases as you care about and define the behaviour your method should exhibit in those cases.
Once your specification is complete, design tests for every part of it, making sure no test passes or fails spuriously because of corner-case conditions. At this point you are ready to code up your implementation and tests. Once that is done, refine your specification, tests, and implementation as necessary until the results are exactly what you want.
Then you document everything (particularly your reasoning for handling corner cases).
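One way to capture such a specification up front (a sketch; the method names are purely illustrative) is as a list of test stubs, one per corner case, which you then fill in and drive to green one by one:
// One empty test stub per specified behaviour of Divide(int a, int b).
@Test public void positiveOperandsReturnTheQuotient() { /* ... */ }
@Test public void integerDivisionTruncatesTowardsZero() { /* ... */ }
@Test public void negativeDividendKeepsTheSpecifiedSign() { /* ... */ }
@Test public void divisionByZeroIsRejected() { /* ... */ }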

As others have mentioned, your option c) would be the pure TDD way to do this. The idea is to build your code up in small red-green-refactor increments. A good, simple example of this is Robert Martin's Bowling Kata.

Well, you can probably write:
@Test
public void testDivide() {
    Math math = new Math();
    int result = math.divide(20, 2);
    Assert.assertEquals(10, result); // 20 / 2 should be 10
}
That's it. When you run this, the test will fail, so you fix your Math.divide method, and then add more cases in the next step.
This is the ideal way, but everyone knows it doesn't always go like that.

The definition of "unit" is not universal, sometimes the unit is the class, sometimes it can be a method. So there is no real universal answer.
In this particular case, I would consider the unit to be a method so wouldn't not all methods before to start coding. Instead, I would do things incrementally, methods after methods. This eliminates a).
However, when writing a test for a method, I would write a rigorous test i.e. I would test the passing and the non passing cases, test at the limits, test the special values, etc. When writing a test, you're defining a contract and this contract should include exceptional situation. Thinking of them from the start do help. And for me, the point of having a green light is to be done so I want my test to be exhaustive. And I think that this is b).
If your test is not exhaustive, then you're actually not done with your unit even if the test is passing and I don't really see the point. I guess that this is c).
So my choice would be b).
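For the Divide example, such a rigorous, contract-style test might look like the following sketch (JUnit 4; the assumption that division by zero throws ArithmeticException is part of the contract you would have to decide on):
@Test
public void divideHonoursItsContract() {
    Math math = new Math();
    Assert.assertEquals(10, math.divide(20, 2));    // regular case
    Assert.assertEquals(-10, math.divide(20, -2));  // sign handling
    Assert.assertEquals(0, math.divide(0, 5));      // zero dividend
    Assert.assertEquals(3, math.divide(7, 2));      // truncation towards zero
    try {
        math.divide(1, 0);                          // exceptional case
        Assert.fail("expected ArithmeticException for division by zero");
    } catch (ArithmeticException expected) {
        // the exceptional part of the contract holds
    }
}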

If you are going for completeness, you should design and write your unit tests prior to development and then develop the primary functionality against those tests. The more thorough you are, the clearer the scope and the better the final quality. If time and functionality allow, I would create tests for each method/function.

Related

How do I write a unit test when the class to test is complicated?

I am trying to employ TDD in writing a backgammon game in C++ using VS 2010.
I have set up CxxTest to write the test cases.
The first class to test is:
class Position
{
public:
    ...
    bool IsSingleMoveValid(.....);
    ...
};
I'd like to write a test for the function IsSingleMoveValid(), and I guess the test should prove that the function works correctly. Unfortunately, there are so many cases to test, and even if I test several cases, some might escape.
What do you suggest ? How does TDD handle these problems ?
A few guidelines:
Test regular cases. In your problem: test legal moves that you KNOW are valid. You can either take the easy way and have only a handful of test cases, or you can write a loop generating all possible legal moves that can occur in your application and test them all.
Test boundary cases. This is not really applicable to your problem, but for testing simple numerical functions of the form f(x), where you know that x has to lie in a range [x_min, x_max), you would typically also test f(x_min - 1), f(x_min), f(x_max - 1), and f(x_max); a sketch of this pattern follows after these guidelines. (It could be relevant for board games if you have an internal board representation with an overflow edge around it.)
Test known bugs. If you ever come across a legal move that is not recognized by your IsSingleMoveValid(), you add this as a testcase and then fix your code. It's useful to keep such test cases to guard against future regressions (some future code additions/modifications could re-introduce this bug, and the test will catch it).
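A minimal Java sketch of the boundary-case pattern from the second guideline, using a hypothetical half-open range check (the same idea applies with CxxTest's TS_ASSERT macros):
// Hypothetical validity check: x is valid when it lies in [MIN, MAX).
static final int MIN = 0;
static final int MAX = 24;
static boolean isInRange(int x) { return x >= MIN && x < MAX; }

@Test
public void boundariesOfTheHalfOpenRange() {
    Assert.assertFalse(isInRange(MIN - 1)); // just below the lower bound
    Assert.assertTrue(isInRange(MIN));      // lower bound is included
    Assert.assertTrue(isInRange(MAX - 1));  // last valid value
    Assert.assertFalse(isInRange(MAX));     // upper bound is excluded
}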
Test coverage (the percentage of code lines covered by tests) is a metric that can be calculated by tools such as gcov. You should do your own cost-benefit analysis of how thoroughly you want to test your code, but for something as essential as legal-move detection in a game program, I'd suggest you be vigilant here.
Others have already commented on breaking up the tests into smaller subtests. The nomenclature for that is that such isolated functions are tested with unit testing, whereas the collaboration between such functions in higher-level code is tested with integration testing.
Generally, by breaking complex classes into multiple simpler classes, each doing a well-defined task that is easy to test.
If you are writing the tests then the easiest thing to do is to break your IsSingleMoveValid function down into smaller functions and test them individually.
As you can see on Wikipedia, TDD - Test-Driven Development - means writing the tests first.
In your case, it would mean establishing all valid moves and writing a test for them. Then you write code for each of those failing tests until all the tests pass.
... Unfortunately there are so many cases to test and even if I test several cases, some might escape.
As others have said, when a function is too complex, it is time for refactoring!
I strongly suggest the book Refactoring: Improving the Design of Existing Code by Martin Fowler, with contributions from Kent Beck and others. It is both a learning and a reference book, which makes it very valuable in my opinion.
This is probably the best book on refactoring and it will teach you how to split your function without breaking everything. Also, refactoring is a really important asset for TDD. :)
There is no such thing as "too many cases to test". If the code handling a set of cases can be written, those cases need to be thought through. And if they can be thought through and written, the code that tests them can be written as well. On average, for every 10 lines of (testable) code that you write, you can expect a roughly constant factor of testing code associated with it.
Of course, the whole trick is knowing how to write code that matches the testable description.
Hence, you need to start by writing a test for all the cases.
If the set of cases is big - let's say, for the sake of discussion, that you have a countable set of possible cases to test (e.g. that add(n, m) == n + m for all integers n and m) - your actual code may still be really simple: return n + m. This is trivially true, but don't miss the point: you don't need to test all the possible moves on the board. TDD aims for your tests to cover all the code (i.e. the tests exercise all the if branches in your code), not necessarily all possible values or combinations of states (which are exponentially many).
A project with 80-90% line coverage means that your tests exercise 9 out of every 10 lines of your code. In general, if there is a bug in your code, it will in the majority of circumstances show up when a particular code path is walked.
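A small sketch of what "cover all the branches, not all the values" means in practice (hypothetical sign function): three tests give full branch coverage even though the input space has about 4 billion values.
static int sign(int x) {
    if (x > 0) return 1;
    if (x < 0) return -1;
    return 0;
}

@Test public void positiveInput() { Assert.assertEquals(1, sign(42)); }
@Test public void negativeInput() { Assert.assertEquals(-1, sign(-7)); }
@Test public void zeroInput() { Assert.assertEquals(0, sign(0)); }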

Unit Testing: what to test / what not to test?

A few days ago I started getting interested in unit testing and TDD in C# and VS2010. I've read blog posts, watched YouTube tutorials, and plenty more stuff that explains why TDD and unit testing are so good for your code, and how to do them.
But the biggest problem I find is that I don't know what to check in my tests and what not to check.
I understand that I should check all the logical operations and problems with references and dependencies, but, for example, should I create a unit test for string formatting that's supposed to come from user input? Or is that just wasting my time when I can simply check it in the actual code?
Is there any guide to clarify this problem?
In TDD every line of code must be justified by a failing test-case written before the code.
This means that you cannot develop any code without a test-case. If you have a line of code (condition, branch, assignment, expression, constant, etc.) that can be modified or deleted without causing any test to fail, it means this line of code is useless and should be deleted (or you have a missing test to support its existence).
That is a bit extreme, but this is how TDD works. That being said if you have a piece of code and you are wondering whether it should be tested or not, you are not doing TDD correctly. But if you have a string formatting routine or variable incrementation or whatever small piece of code out there, there must be a test case supporting it.
UPDATE (use-case suggested by Ed.):
Like for example, adding an object to a list and creating a test to see if it is really inside or there is a duplicate when the list shouldn't allow them.
Here is a counterexample, you would be surprised how hard it is to spot copy-paste errors and how common they are:
private Set<String> inclusions = new HashSet<String>();
private Set<String> exclusions = new HashSet<String>();

public void include(String item) {
    inclusions.add(item);
}

public void exclude(String item) {
    inclusions.add(item);
}
On the other hand, testing the include() and exclude() methods alone is overkill because they do not represent any use cases by themselves. However, they are probably part of some business use case, which you should test instead.
Obviously you shouldn't test whether x in x = 7 is really 7 after assignment. Testing generated getters/setters is also overkill. But it is the easiest code that most often breaks, all too often due to copy-and-paste errors or typos (especially in dynamic languages).
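For instance, a use-case level test along the following lines (ItemFilter and its methods are hypothetical wrappers around the two sets) would catch the copy-paste error above, because excluding an item would have no visible effect:
@Test
public void excludedItemIsNotReportedAsIncluded() {
    ItemFilter filter = new ItemFilter();
    filter.include("apple");
    filter.exclude("banana");
    Assert.assertTrue(filter.isIncluded("apple"));
    Assert.assertFalse(filter.isIncluded("banana")); // fails while exclude() adds to inclusions
}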
See also:
Mutation testing
Your first few TDD projects are probably going to result in worse design/redesign and take longer to complete while you are learning (at least in my experience). This is why you shouldn't jump into using TDD on a large, critical project.
My advice is to use "pure" TDD (acceptance/unit test everything test-first) on a few small projects (100-10,000 LOC). Either do the side projects on your own or if you don't code in your free time, use TDD on small internal utility programs for your job.
After you do "pure" TDD on about 6-12 projects, you will start to understand how TDD affects design and learn how to design for testability. Once you know how to design for testability, you will need to TDD less and maximize the ROI of unit, regression, acceptance, etc. tests rather than test everything up front.
For me, TDD is more a teaching method for good code design than a practical methodology. However, I still TDD logic code, and I unit test instead of debugging.
There is no simple answer to this question. There is the law of diminishing returns in action, so achieving perfect coverage is seldom worth it. Knowing what to test is a thing of experience, not rules. It’s best to consciously evaluate the process as you go. Did something break? Was it feasible to test? If not, is it possible to rewrite the code to make it more testable? Is it worth it to always test for such cases in the future?
If you split your code into models, views and controllers, you’ll find that most of the critical code is in the models, and those should be fairly testable. (That’s one of the main points of MVC.) If a piece of code is critical, I test it, even if it means that I would have to rewrite it to make it more testable. If a piece of code is easy to get wrong or get broken by future updates, it gets a test. I seldom test controllers and views, as it’s not proving worth the trouble for me.
The way I see it all of your code falls into one of three buckets:
Code that is easy to test: This includes your own deterministic public methods.
Code that is difficult to test: This includes GUI, non-deterministic methods, private methods, and methods with complex setup.
Code that you don't want to test: This includes 3rd party code, and code that is difficult to test and not worth the effort.
Of the three, you should focus on testing the easy code. The difficult-to-test code should be refactored into two parts: code that you don't want to test, and easy-to-test code. And of course, you should test the refactored easy code.
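A sketch of that split (all names hypothetical): the I/O-bound shell stays thin and untested, while the extracted pure logic is trivial to test.
public class DiscountCalculator {
    // Difficult to test: mixes console I/O with logic. Keep it thin and leave it untested.
    public void printDiscount(java.util.Scanner in) {
        System.out.println(discountFor(in.nextDouble()));
    }

    // Easy to test: pure, deterministic logic extracted from the method above.
    public double discountFor(double total) {
        return total > 100.0 ? total * 0.10 : 0.0;
    }
}

@Test
public void ordersOverOneHundredGetTenPercentDiscount() {
    Assert.assertEquals(15.0, new DiscountCalculator().discountFor(150.0), 0.0001);
}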
I think you should only unit test entry points to the behavior of the system. This includes public methods, public accessors, and public fields, but not constants (constant fields, enums, methods, etc.). It also includes any code which directly deals with IO; I explain why further below.
My reasoning is as follows:
Everything that's public is basically an entry point to a behavior of the system. A unit test should therefore be written that guarantees that the expected behavior of that entry point works as required. You shouldn't test all possible ways of calling the entry point, only the ones that you explicitly require. Your unit tests are therefore also the specs of what behavior your system supports and your documentation of how to use it.
Things that are not public can basically be deleted or refactored at will with no impact on the behavior of the system. If you were to test them, you'd create a hard dependency from your unit tests to that code, which would prevent you from refactoring it. That's why you should not test anything but public methods, fields, and accessors.
Constants by design are not behavior, but axioms. A unit test that verifies a constant is itself a constant, so it would only be duplicated code and useless effort to write a test for constants.
So to answer your specific example:
should I create a unit test for string formatting that's supposed to come from user input?
Yes, absolutely. All methods which receive or send external input/output (which can be summed up as doing IO) should be unit tested. This is probably the only case where I'd say non-public things that receive IO should also be unit tested, because I consider IO to be a public entry point. Anything that's an entry point for an external actor I consider public.
So unit test public methods, public fields, public accessors, even when those are static constructs and also unit test anything which receives or sends data from an external actor, be it a user, a database, a protocol, etc.
NOTE: You can write temporary unit tests on non public things as a way for you to help make sure your implementation works. This is more of a way to help you figure out how to implement it properly, and to make sure your implementation works as you intend. After you've tested that it works though, you should delete the unit test or disable it from your test suite.
Kent Beck, in Extreme Programming Explained, said you only need to test the things that need to work in production.
That's a brusque way of encapsulating both test-driven development, where every change in production code is supported by a test that fails when the change is not present; and You Ain't Gonna Need It, which says there's no value in creating general-purpose classes for applications that only deal with a couple of specific cases.
I think you have to change your point of view.
In a pure form TDD requires the red-green-refactor workflow:
write a test (it must fail): RED
write code to satisfy the test: GREEN
refactor your code
So the question "What do I have to test?" has an answer like: "You have to write a test that corresponds to a feature or a particular requirement."
In this way you get maximum code coverage and also a better code design (remember that TDD also stands for Test-Driven "Design").
Generally speaking, you have to test ALL public methods/interfaces.
should I create a unit test for string formatting that's supposed to come from user input? Or is it just wasting my time when I can just check it in the actual code?
Not sure I understand what you mean, but the tests you write in TDD are supposed to test your production code. They aren't tests that check user input.
To put it another way, there can be TDD unit tests that test the user input validation code, but there can't be TDD unit tests that validate the user input itself.
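In other words, a test like the sketch below (the validator class is hypothetical) exercises the input-handling code with fixed sample strings; it does not, and cannot, validate what a real user will eventually type:
@Test
public void validatorAcceptsWellFormedAndRejectsMalformedInput() {
    PhoneNumberValidator validator = new PhoneNumberValidator();
    Assert.assertTrue(validator.isValid("+1-555-0100"));
    Assert.assertFalse(validator.isValid("hello"));
}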

Prove correctness of unit test

I'm creating a graph framework for learning purposes. I'm using a TDD approach, so I'm writing a lot of unit tests. However, I'm still figuring out how to prove the correctness of my unit tests.
For example, I have this class (not including the implementation, and I have simplified it):
public class SimpleGraph {
    // Returns true on success
    public boolean addEdge(Vertex v1, Vertex v2) { ... }
    // Returns true on success
    public boolean addVertex(Vertex v1) { ... }
}
I also created this unit test:
@Test
public void SimpleGraph_addVertex_noSelfLoopsAllowed() {
    SimpleGraph g = new SimpleGraph();
    Vertex v1 = new Vertex("Vertex 1");
    g.addVertex(v1);
    boolean expected = false;
    boolean actual = g.addEdge(v1, v1);
    Assert.assertEquals(expected, actual);
}
Okay, awesome, it works. There is only one crux here: I have proved that the functions work for this case only. However, in my graph theory courses, all I'm doing is proving theorems mathematically (induction, contradiction, etc.).
So I was wondering: is there a way I can mathematically prove the correctness of my unit tests? Is there a good practice for this? That is, can we test the unit for correctness, instead of testing it for one particular outcome?
No. Unit tests don't attempt to prove correctness in the general case. They should test specific examples. The idea is to pick enough representative examples that if there is an error it will probably be found by one or more of the tests, but you can't be sure to catch all errors this way. For example if you were unit testing an add function you might test some positive numbers, some negative, some large numbers and some small, but using this approach alone you'd be lucky to find the case where this implementation doesn't work:
int add(int a, int b) {
if (a == 1234567 && b == 2461357) { return 42; }
return a + b;
}
You would, however, be able to spot this error by combining unit testing with code coverage. Even with 100% code coverage, though, there can still be logical errors that aren't caught by any test.
It is possible to prove code for correctness. It is called formal verification, but it's not what unit tests are for. It's also expensive to do for all but the most simple software so it is rarely done in practice.
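A sketch of the representative-examples approach for the add() above (assuming the test can call add() directly): a few positive, negative, mixed and large inputs. None of them happens to hit the hidden special case, which is exactly why pairing the tests with a coverage report helps - the report would show that branch as never executed.
@Test
public void addRepresentativeValues() {
    Assert.assertEquals(5, add(2, 3));                             // small positives
    Assert.assertEquals(-5, add(-2, -3));                          // negatives
    Assert.assertEquals(0, add(7, -7));                            // mixed signs
    Assert.assertEquals(2000000000, add(1000000000, 1000000000));  // large values
}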
Probably not. Unit tests approach the problem by exhaustive testing:
You verify that your test works by writing the test before implementing the behavior.
Then you see that the test fails.
Then you implement the behavior to pass that test, and only that test. Never write code that is not needed to implement a test.
Really, what you're proving is that one case of your algorithm is working, i.e. you're proving that a subset of your execution paths is valid. Testing will never help you prove correctness in the strict mathematical sense (except for very simple cases); in the general case, this is impossible. Testing is a pragmatic approach to this problem where we try to show that representative cases are correct (boundary values, values somewhere in the middle, etc.) and hope that this works.
Still, some tools such as FindBugs manage to give you conservative proofs of some properties of your code.
If you would like formal proof of your stuff, there's always Coq, Agda and similar languages, but that's a hell of a stretch from writing a unit test :)
One great, simple introduction to testing vs. proofs is "Abstract Interpretation in a Nutshell" by Patrick Cousot.
There are tools for formally specifying how your code operates, and even tools to prove that it works that way, but they are far away from the unit-testing area.
Two examples from the Java world are JML and ESC/Java2.
NASA has a whole department dedicated to formal methods.
My 2 cents. Look at it this way: you think you wrote a function that does something, but what you really did was write a function that you think does something. If you cannot write a mathematical proof of what the code does, you might as well treat the function as a hypothesis; you cannot be sure it will always be correct, but at least it is falsifiable.
And that's why we write unit tests (note: they are just other functions, prone to have bugs themselves, sigh) - to try to falsify the hypothesis by finding counter-examples for which it does not hold.
If you want to go for correctness properties of your code, you can, as already mentioned in previous posts, apply some formal verification tools. This is not an easy thing to do, but may still be doable. There are tools like the KeY system capable of proving first-order properties for Java code. KeY has some problems with things like generics, floats and parallelism, but works quite well for most concepts of the Java language. Moreover, you can automatically create test cases with KeY based on the proof tree.
If you are familiar with JML (this is not hard to learn, basically Java with a bit of logic), you could try out this approach. For really critical parts of your systems, verification might really be something to think about; for other parts of the code, testing some of the possible traces with unit testing might already be sufficient, for example to avoid regression problems.

When doing TDD, why should I do "just enough" to get a test passing?

Looking at posts like this and others, it seems that the correct way to do TDD is to write a test for a feature, get just that feature to pass, and then add another test and refactor as necessary until it passes, then repeat.
My question is: why is this approach used? I completely understand the write tests first idea, because it helps your design. But why wouldn't I create all tests for a specific function, and then implement that function all at once until all tests pass?
The approach comes from the Extreme Programming principle of You Aren't Gonna Need It. If you actually write a single test and then the code that makes it pass, and keep repeating that process, you usually find that you write just enough to get things working. You don't invent new features that are not needed. You don't handle corner cases that don't exist.
Try an experiment. Write out the list of tests you think you need. Set it aside. Then go with the one test at a time approach. See if the lists differ and why. When I do that I almost always end up with fewer tests. I almost always find that I invented a case that I didn't need if I do it the all the tests first way.
For me, it is about "thought burden." If I have all of the possible behaviors to worry about at once, my brain is strained. If I approach them one at a time, I can give full attention to solving the immediate problem.
I believe this derives from the principle of "YAGNI" ("You Ain't Gonna Need It")(*), which states that classes should be as simple as necessary, with no extra features. Hence when you need a feature, you write a test for it, then you write the feature, then you stop. If you wrote a number of tests first, clearly you would be merely speculating on what your API would need to be at some point in the future.
(*) I generally translate that as "You are too stupid to know what will be needed in the future", but that's another topic......
IMHO it reduces the chance of over-engineering the piece of code you are writing.
It's just easier to add unnecessary code when you are looking at several usage scenarios at once.
Dan North has suggested that there is no such thing as test-driven design because the design is not really driven out by testing -- that these unit tests only become tests once functionality is implemented, but during the design phase you are really designing by example.
This makes sense -- your tests are setting up a range of sample data and conditions with which the system under test is going to operate, and you drive out design based on these example scenarios.
Some of the other answers suggest that this is based on YAGNI. This is partly true.
Beyond that, though, there is the issue of complexity. As is often stated, programming is about managing complexity -- breaking things down into comprehensible units.
If you write 10 tests to cover cases where param1 is null, param2 is null, string1 is empty, int1 is negative, and the current day of the week is a weekend, and then go to implement that, you are having to juggle a lot of complexity at once. This opens up space to introduce bugs, and it becomes very difficult to sort out why tests are failing.
On the other hand, if you write the first test to cover an empty string1, you barely have to think about the implementation. Once the test is passing, you move on to a case where the current day is a weekend. You look at the existing code and it becomes obvious where the logic should go. You run tests and if the first test is now failing, you know that you broke it while implementing the day-of-the-week thing. I'd even recommend that you commit source between tests so that if you break something you can always revert to a passing state and try again.
Doing just a little at a time and then verifying that it works dramatically reduces the space for the introduction of defects, and when your tests fail after implementation you have changed so little code that it is very easy to identify the defect and correct it, because you know that the existing code was already working properly.
This is a great question. You need to find a balance between writing all tests in the universe of possible tests, and the most likely user scenarios. One test is, IMHO, not enough, and I typically like to write 3 or 4 tests which represent the most common uses of the feature. I also like to write a best case test and a worst case test as well.
Writing many tests helps you to anticipate and understand the potential use of your feature.
I believe TDD advocates writing one test at a time because it forces you to think in terms of the principle of doing the simplest thing that could possibly work at each step of development.
I think the article you sent is exactly the answer. If you write all the tests first and all of the scenarios first, you will probably write your code to handle all of those scenarios at once and most of the time you probably end up with code that is fairly complex to handle all of these.
On the other hand, if you go one at a time, you will end up refactoring your existing code each time to end up with code probably as simple as it can be for all the scenarios.
Like in the case of the link you gave in your question, had they written all the tests first, I am pretty sure they would have not ended up with a simple if/else statement, but probably a fairly complex recursive piece of code.
The reason behind the principle is simple. How practical it is to stick to is a separate question.
The reason is that if you are writing more code than is needed to pass the current test, you are writing code that is, by definition, untested. (It's nothing to do with YAGNI.)
If you write the next test to "catch up" with the production code then you've just written a test that you haven't seen fail. The test may be called "TestNextFeature" but it may as well return true for all the evidence you have on it.
TDD is all about making sure that all code - production and tests - is tested and that all those pesky "but I'm sure I wrote it right" bugs don't get into the code.
I would do as you suggest. Write several tests for a specific function, implement the function, and ensure that all of the tests for this function pass. This ensures that you understand the purpose and usage of the function separately from your implementation of it.
If you need to do a lot more implementation wise than what is tested by your unit tests, then your unit tests are likely not comprehensive enough.
I think part of that idea is to keep simplicity, keep to designed/planned features, and make sure that your tests are sufficient.
Lots of good answers above - YAGNI is the first answer that jumps to mind.
The other important thing about the 'just get the test passing' guideline though, is that TDD is actually a three stage process:
Red > Green > Refactor
Frequently revisiting the final part, the refactoring, is where a lot of the value of TDD is delivered, in terms of cleaner code, better API design, and more confidence in the software. You need to refactor in really small, short blocks, though, lest the task become too big.
It is hard to get into this habit, but stick with it, as it's an oddly satisfying way to work once you get into the cycle.

How deep are your unit tests?

The thing I've found about TDD is that it takes time to get your tests set up, and being naturally lazy I always want to write as little code as possible. The first thing I seem to do is test that my constructor has set all the properties, but is this overkill?
My question is: to what level of granularity do you write your unit tests?
...and is there such a thing as testing too much?
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.
Write unit tests for things you expect to break, and for edge cases. After that, test cases should be added as bug reports come in - before writing the fix for the bug. The developer can then be confident that:
The bug is fixed;
The bug won't reappear.
Per the comment attached - I guess this approach to writing unit tests could cause problems if lots of bugs are, over time, discovered in a given class. This is probably where discretion is helpful - adding unit tests only for bugs that are likely to recur, or where their recurrence would cause serious problems. I've found that a measure of integration testing within unit tests can be helpful in these scenarios - testing code higher up the codepaths can cover the codepaths lower down.
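A minimal sketch of that workflow (the bug and all names are hypothetical): the test is written from the bug report first, fails against the broken code, and then stays in the suite to guard against the bug reappearing.
// Regression test for a reported bug: total() used to blow up on an empty cart.
@Test
public void totalOfAnEmptyCartIsZero() {
    ShoppingCart cart = new ShoppingCart();
    Assert.assertEquals(0.0, cart.total(), 0.0001);
}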
Everything should be made as simple as possible, but not simpler. - A. Einstein
One of the most misunderstood things about TDD is the first word in it: Test. That's why BDD came along - because people didn't really understand that the first D was the important one, namely Driven. We all tend to think a little bit too much about the testing, and a little bit too little about driving the design. And I guess that this is a vague answer to your question, but you should probably consider how to drive your code, instead of what you actually are testing; that is something a coverage tool can help you with. Design is a rather bigger and more problematic issue.
To those who propose testing "everything": realise that "fully testing" a method like int square(int x) requires about 4 billion test cases in common languages and typical environments.
In fact, it's even worse than that: a method void setX(int newX) is also obliged not to alter the values of any other members besides x -- are you testing that obj.y, obj.z, etc. all remain unchanged after calling obj.setX(42);?
It's only practical to test a subset of "everything." Once you accept this, it becomes more palatable to consider not testing incredibly basic behaviour. Every programmer has a probability distribution of bug locations; the smart approach is to focus your energy on testing regions where you estimate the bug probability to be high.
The classic answer is "test anything that could possibly break". I interpret that as meaning that testing setters and getters that don't do anything except set or get is probably too much testing, no need to take the time. Unless your IDE writes those for you, then you might as well.
If your constructor not setting properties could lead to errors later, then testing that they are set is not overkill.
I write tests to cover the assumptions of the classes I will write. The tests enforce the requirements. Essentially, if x can never be 3, for example, I'm going to ensure there is a test that covers that requirement.
Invariably, if I don't write a test to cover a condition, it'll crop up later during "human" testing. I'll certainly write one then, but I'd rather catch them early. I think the point is that testing is tedious (perhaps) but necessary. I write enough tests to be complete but no more than that.
Part of the problem with skipping simple tests now is that in the future refactoring could make that simple property very complicated, with lots of logic. I think the best idea is that you can use tests to verify the requirements of the module. If when you pass X you should get Y back, then that's what you want to test. Then when you change the code later on, you can verify that X gives you Y, and you can add a test that A gives you B when that requirement is added later on.
I've found that the time I spend during initial development writing tests pays off in the first or second bug fix. The ability to pick up code you haven't looked at in 3 months and be reasonably sure your fix covers all the cases, and "probably" doesn't break anything is hugely valuable. You also will find that unit tests will help triage bugs well beyond the stack trace, etc. Seeing how individual pieces of the app work and fail gives huge insight into why they work or fail as a whole.
In most instances, I'd say, if there is logic there, test it. This includes constructors and properties, especially when more than one thing gets set in the property.
With respect to too much testing, it's debatable. Some would say that everything should be tested for robustness, others say that for efficient testing, only things that might break (i.e. logic) should be tested.
I'd lean more toward the second camp, just from personal experience, but if somebody did decide to test everything, I wouldn't say it was too much... a little overkill maybe for me, but not too much for them.
So, No - I would say there isn't such a thing as "too much" testing in the general sense, only for individuals.
Test Driven Development means that you stop coding when all your tests pass.
If you have no test for a property, then why should you implement it? If you do not test/define the expected behaviour in case of an "illegal" assignment, what should the property do?
Therefore I'm totally for testing every behaviour a class should exhibit. Including "primitive" properties.
To make this testing easier, I created a simple NUnit TestFixture that provides extension points for setting/getting the value, takes lists of valid and invalid values, and has a single test to check whether the property works right. Testing a single property could look like this:
[TestFixture]
public class Test_MyObject_SomeProperty : PropertyTest<int>
{
    private MyObject obj = null;
    public override void SetUp() { obj = new MyObject(); }
    public override void TearDown() { obj = null; }
    public override int Get() { return obj.SomeProperty; }
    public override void Set(int value) { obj.SomeProperty = value; }
    public override IEnumerable<int> SomeValidValues() { return new List<int> { 1, 3, 5, 7 }; }
    public override IEnumerable<int> SomeInvalidValues() { return new List<int> { 2, 4, 6 }; }
}
Using lambdas and attributes this might even be written more compactly. I gather MbUnit even has some native support for things like that. The point, though, is that the above code captures the intent of the property.
P.S.: Probably the PropertyTest should also have a way of checking that other properties on the object didn't change. Hmm .. back to the drawing board.
I write unit tests to reach the maximum feasible coverage. If I cannot reach some code, I refactor until the coverage is as complete as possible.
After finishing this blanket test writing, I usually write one test case reproducing each bug as it is found.
I'm used to separating code testing and integration testing. During integration testing (which are also unit tests, but applied to groups of components, so not exactly what unit tests are for), I test for the requirements to be implemented correctly.
So the more I drive my programming by writing tests, the less I worry about the level of granularity of the testing. Looking back, it seems I am doing the simplest thing possible to achieve my goal of validating behaviour. This gives me a layer of confidence that my code is doing what I ask it to do, but it is not an absolute guarantee that my code is bug-free. I feel that the correct balance is to test standard behaviour and maybe an edge case or two, then move on to the next part of my design.
I accept that this will not cover all bugs and use other traditional testing methods to capture these.
Generally, I start small, with inputs and outputs that I know must work. Then, as I fix bugs, I add more tests to ensure the things I've fixed are tested. It's organic, and works well for me.
Can you test too much? Probably, but it's probably better to err on the side of caution in general, though it'll depend on how mission-critical your application is.
I think you must test everything in the "core" of your business logic. Getters and setters too, because they could accept a negative or null value that you might not want to accept. If you have time (it always depends on your boss), it's good to test the other business logic and all the controllers that call these objects (moving gradually from unit tests to integration tests).
I don't unit test simple setter/getter methods that have no side effects. But I do unit test every other public method. I try to create tests for all the boundary conditions in my algorithms and check the coverage of my unit tests.
It's a lot of work, but I think it's worth it. I would rather write code (even test code) than step through code in a debugger. I find the code-build-deploy-debug cycle very time-consuming, and the more exhaustive the unit tests I have integrated into my build, the less time I spend going through that cycle.
You didn't say which architecture you are coding for, but for Java I use Maven 2, JUnit, DbUnit, Cobertura, and EasyMock.
The more I read about it, the more I think some unit tests are just like some design patterns: a smell of insufficient languages.
When you need to test whether your trivial getter actually returns the right value, it is because you may have mixed up the getter name and the member variable name. Enter Ruby's attr_reader :name, and this can't happen any more. It's just not possible in Java.
If your getter ever gets nontrivial you can still add a test for it then.
Test the source code that you're worried about.
It is not useful to test portions of code that you are very, very confident about, as long as you don't make mistakes in them.
Test bugfixes, so that it is the first and last time you fix a bug.
Test to get confidence of obscure code portions, so that you create knowledge.
Test before heavy and medium refactoring, so that you don't break existing features.
This answer is more about figuring out how many unit tests to use for a given method you know you want to unit test because of its criticality/importance. Using McCabe's Basis Path Testing technique, you can quantitatively get better code-coverage confidence than simple "statement coverage" or "branch coverage":
Determine the Cyclomatic Complexity value of the method you want to unit test (Visual Studio 2010 Ultimate, for example, can calculate this for you with its static analysis tools; otherwise, you can calculate it by hand via the flowgraph method - http://users.csc.calpoly.edu/~jdalbey/206/Lectures/BasisPathTutorial/index.html)
List the basis set of independent paths that flow through your method - see the link above for a flowgraph example
Prepare unit tests for each independent basis path determined in step 2
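As a worked sketch (the method is hypothetical), the code below has two decision points, so its Cyclomatic Complexity is 3; the basis set therefore contains three independent paths, and there is one unit test per path:
static String classify(int score) {
    if (score < 0) {                 // decision 1
        throw new IllegalArgumentException("score must be >= 0");
    }
    if (score >= 60) {               // decision 2
        return "pass";
    }
    return "fail";
}

@Test(expected = IllegalArgumentException.class)
public void path1_negativeScoreIsRejected() { classify(-1); }

@Test
public void path2_passingScore() { Assert.assertEquals("pass", classify(75)); }

@Test
public void path3_failingScore() { Assert.assertEquals("fail", classify(30)); }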