How deep are your unit tests? - unit-testing

The thing I've found about TDD is that it takes time to get your tests set up, and being naturally lazy I always want to write as little code as possible. The first thing I seem to do is test that my constructor has set all the properties, but is this overkill?
My question is: at what level of granularity do you write your unit tests?
...and is there such a thing as testing too much?

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.

Write unit tests for things you expect to break, and for edge cases. After that, test cases should be added as bug reports come in - before writing the fix for the bug. The developer can then be confident that:
The bug is fixed;
The bug won't reappear.
Per the comment attached - I guess this approach to writing unit tests could cause problems if lots of bugs are, over time, discovered in a given class. This is probably where discretion is helpful - adding unit tests only for bugs that are likely to recur, or where their recurrence would cause serious problems. I've found that a measure of integration testing in unit tests can be helpful in these scenarios - testing code higher up the codepath can cover the codepaths lower down.

Everything should be made as simple as
possible, but not simpler. - A. Einstein
One of the most misunderstood things about TDD is the first word in it. Test. That's why BDD came along: because people didn't really understand that the first D was the important one, namely Driven. We all tend to think a little too much about the Testing, and a little too little about the driving of design. And I guess this is a vague answer to your question, but you should probably consider how to drive your code, instead of what you actually are testing; that is something a coverage tool can help you with. Design is a much bigger and more problematic issue.

To those who propose testing "everything": realise that "fully testing" a method like int square(int x) requires about 4 billion test cases in common languages and typical environments.
In fact, it's even worse than that: a method void setX(int newX) is also obliged not to alter the values of any other members besides x -- are you testing that obj.y, obj.z, etc. all remain unchanged after calling obj.setX(42);?
It's only practical to test a subset of "everything." Once you accept this, it becomes more palatable to consider not testing incredibly basic behaviour. Every programmer has a probability distribution of bug locations; the smart approach is to focus your energy on testing regions where you estimate the bug probability to be high.
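If a particular setter does land in your "high probability" region, one cheap compromise is to assert the one effect you care about plus the invariants most likely to be broken by a typo, rather than attempting "everything". A minimal JUnit sketch, using a hypothetical Point3D class invented purely for illustration:

import org.junit.Assert;
import org.junit.Test;

public class SetterInvariantTest {

    // Hypothetical value object, used only for illustration.
    static class Point3D {
        private int x, y, z;
        Point3D(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
        void setX(int newX) { this.x = newX; }
        int getX() { return x; }
        int getY() { return y; }
        int getZ() { return z; }
    }

    @Test
    public void setX_changesOnlyX() {
        Point3D obj = new Point3D(1, 2, 3);
        obj.setX(42);
        Assert.assertEquals(42, obj.getX());
        // The invariant part: the other members must be untouched.
        Assert.assertEquals(2, obj.getY());
        Assert.assertEquals(3, obj.getZ());
    }
}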

The classic answer is "test anything that could possibly break". I interpret that as meaning that testing setters and getters that don't do anything except set or get is probably too much testing, no need to take the time. Unless your IDE writes those for you, then you might as well.
If your constructor not setting properties could lead to errors later, then testing that they are set is not overkill.

I write tests to cover the assumptions of the classes I will write. The tests enforce the requirements. Essentially, if x can never be 3, for example, I'm going to ensure there is a test that covers that requirement.
Invariably, if I don't write a test to cover a condition, it'll crop up later during "human" testing. I'll certainly write one then, but I'd rather catch them early. I think the point is that testing is tedious (perhaps) but necessary. I write enough tests to be complete but no more than that.
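For instance, a requirement like "x can never be 3" can be captured directly as a test against a hypothetical setter (the Widget class and its method are made up for illustration):

import org.junit.Test;

public class RequirementTest {

    // Hypothetical class whose contract says x may never be 3.
    static class Widget {
        private int x;
        void setX(int value) {
            if (value == 3) {
                throw new IllegalArgumentException("x can never be 3");
            }
            this.x = value;
        }
    }

    // The test documents and enforces the requirement.
    @Test(expected = IllegalArgumentException.class)
    public void settingXToThreeIsRejected() {
        new Widget().setX(3);
    }
}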

Part of the problem with skipping simple tests now is that in the future refactoring could make that simple property very complicated, with lots of logic. I think the best idea is that you can use tests to verify the requirements for the module. If when you pass X you should get Y back, then that's what you want to test. Then when you change the code later on, you can verify that X still gives you Y, and you can add a test for A gives you B when that requirement is added later on.
I've found that the time I spend during initial development writing tests pays off in the first or second bug fix. The ability to pick up code you haven't looked at in 3 months and be reasonably sure your fix covers all the cases, and "probably" doesn't break anything is hugely valuable. You also will find that unit tests will help triage bugs well beyond the stack trace, etc. Seeing how individual pieces of the app work and fail gives huge insight into why they work or fail as a whole.

In most instances, I'd say, if there is logic there, test it. This includes constructors and properties, especially when more than one thing gets set in the property.
With respect to too much testing, it's debatable. Some would say that everything should be tested for robustness, others say that for efficient testing, only things that might break (i.e. logic) should be tested.
I'd lean more toward the second camp, just from personal experience, but if somebody did decide to test everything, I wouldn't say it was too much... a little overkill maybe for me, but not too much for them.
So, No - I would say there isn't such a thing as "too much" testing in the general sense, only for individuals.

Test Driven Development means that you stop coding when all your tests pass.
If you have no test for a property, then why should you implement it? If you do not test/define the expected behaviour in case of an "illegal" assignment, what should the property do?
Therefore I'm totally for testing every behaviour a class should exhibit. Including "primitive" properties.
To make this testing easier, I created a simple NUnit TestFixture that provides extension points for setting/getting the value and takes lists of valid and invalid values and has a single test to check whether the property works right. Testing a single property could look like this:
[TestFixture]
public class Test_MyObject_SomeProperty : PropertyTest<int>
{
    private MyObject obj = null;
    public override void SetUp() { obj = new MyObject(); }
    public override void TearDown() { obj = null; }
    public override int Get() { return obj.SomeProperty; }
    public override void Set(int value) { obj.SomeProperty = value; }
    public override IEnumerable<int> SomeValidValues() { return new List<int> { 1, 3, 5, 7 }; }
    public override IEnumerable<int> SomeInvalidValues() { return new List<int> { 2, 4, 6 }; }
}
Using lambdas and attributes this might even be written more compactly. I gather MbUnit even has some native support for things like that. The point, though, is that the above code captures the intent of the property.
P.S.: Probably the PropertyTest should also have a way of checking that other properties on the object didn't change. Hmm .. back to the drawing board.

I write unit tests to reach the maximum feasible coverage. If I cannot reach some code, I refactor until the coverage is as full as possible.
After finishing this blind test writing, I usually write one test case reproducing each bug as it is found.
I distinguish between code testing and integration testing. Integration tests (which are also unit tests, but on groups of components, so not exactly what unit tests are for) are where I test that the requirements are implemented correctly.

So the more I drive my programming by writing tests, the less I worry about the level of granularity of the testing. Looking back, it seems I am doing the simplest thing possible to achieve my goal of validating behaviour. This means I am generating a layer of confidence that my code is doing what I ask it to do; however, this is not an absolute guarantee that my code is bug-free. I feel that the correct balance is to test standard behaviour and maybe an edge case or two, then move on to the next part of my design.
I accept that this will not cover all bugs and use other traditional testing methods to capture these.

Generally, I start small, with inputs and outputs that I know must work. Then, as I fix bugs, I add more tests to ensure the things I've fixed are tested. It's organic, and works well for me.
Can you test too much? Probably, but it's probably better to err on the side of caution in general, though it'll depend on how mission-critical your application is.

I think you must test everything in the "core" of your business logic. Getters and setters too, because they could accept a negative or null value that you might not want to accept. If you have time (it always depends on your boss), it's good to test the rest of the business logic and all the controllers that call these objects (moving slowly from unit tests into integration tests).

I don't unit test simple setter/getter methods that have no side effects. But I do unit test every other public method. I try to create tests for all the boundary conditions in my algorithms and check the coverage of my unit tests.
It's a lot of work, but I think it's worth it. I would rather write code (even testing code) than step through code in a debugger. I find the code-build-deploy-debug cycle very time consuming, and the more exhaustive the unit tests I have integrated into my build, the less time I spend going through that code-build-deploy-debug cycle.
You didn't say which architecture you are coding to, but for Java I use Maven 2, JUnit, DbUnit, Cobertura, & EasyMock.

The more I read about it, the more I think some unit tests are just like some patterns: a smell of insufficient languages.
When you need to test whether your trivial getter actually returns the right value, it is because you might mix up the getter name and the member variable name. Enter Ruby's 'attr_reader :name', and this can't happen any more. It's just not possible in Java.
If your getter ever gets nontrivial you can still add a test for it then.

Test the source code that you are worried about.
It is not useful to test portions of code that you are very, very confident about, as long as you don't make mistakes in them.
Test bugfixes, so that it is the first and last time you fix a bug.
Test to get confidence of obscure code portions, so that you create knowledge.
Test before heavy and medium refactoring, so that you don't break existing features.

This answer is more for figuring out how many unit tests to use for a given method you know you want to unit test due to its criticality/importance. Using Basis Path Testing technique by McCabe, you could do the following to quantitatively have better code coverage confidence than simple "statement coverage" or "branch coverage":
Determine the Cyclomatic Complexity value of the method that you want to unit test (Visual Studio 2010 Ultimate, for example, can calculate this for you with its static analysis tools; otherwise, you can calculate it by hand via the flowgraph method - http://users.csc.calpoly.edu/~jdalbey/206/Lectures/BasisPathTutorial/index.html)
List the basis set of independent paths that flow through your method - see the link above for a flowgraph example
Prepare unit tests for each independent basis path determined in step 2
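As a rough illustration of steps 1-3, consider a hypothetical method with two independent decisions: its cyclomatic complexity is 2 + 1 = 3, so the basis set contains three independent paths, each covered by one test (the classify method below is invented for the example):

import org.junit.Assert;
import org.junit.Test;

public class BasisPathExampleTest {

    // Hypothetical method under test: two decisions, so cyclomatic
    // complexity = 2 + 1 = 3, and the basis set has three paths.
    static int classify(int value) {
        int result = 0;
        if (value < 0) { result = -1; }    // decision 1
        if (value > 100) { result = 1; }   // decision 2
        return result;
    }

    @Test public void pathWithBothDecisionsFalse() { Assert.assertEquals(0, classify(50)); }
    @Test public void pathWithFirstDecisionTrue()  { Assert.assertEquals(-1, classify(-5)); }
    @Test public void pathWithSecondDecisionTrue() { Assert.assertEquals(1, classify(200)); }
}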

Related

Unit Testing : what to test / what not to test?

Since a few days ago I've started to feel interested in Unit Testing and TDD in C# and VS2010. I've read blog posts, watched youtube tutorials, and plenty more stuff that explains why TDD and Unit Testing are so good for your code, and how to do it.
But the biggest problem I find is, that I don't know what to check in my tests and what not to check.
I understand that I should check all the logical operations, problems with references and dependencies, but for example, should I create a unit test for a string formatting that's supposed to be user input? Or is it just wasting my time when I can just check it in the actual code?
Is there any guide to clarify this problem?
In TDD every line of code must be justified by a failing test-case written before the code.
This means that you cannot develop any code without a test-case. If you have a line of code (condition, branch, assignment, expression, constant, etc.) that can be modified or deleted without causing any test to fail, it means this line of code is useless and should be deleted (or you have a missing test to support its existence).
That is a bit extreme, but this is how TDD works. That being said if you have a piece of code and you are wondering whether it should be tested or not, you are not doing TDD correctly. But if you have a string formatting routine or variable incrementation or whatever small piece of code out there, there must be a test case supporting it.
UPDATE (use-case suggested by Ed.):
Like for example, adding an object to a list and creating a test to see if it is really inside or there is a duplicate when the list shouldn't allow them.
Here is a counterexample, you would be surprised how hard it is to spot copy-paste errors and how common they are:
private Set<String> inclusions = new HashSet<String>();
private Set<String> exclusions = new HashSet<String>();
public void include(String item) {
    inclusions.add(item);
}
public void exclude(String item) {
    inclusions.add(item); // copy-paste bug: should be exclusions.add(item)
}
On the other hand, testing the include() and exclude() methods alone is overkill, because they do not represent any use-cases by themselves. However, they are probably part of some business use-case, which is what you should test instead.
Obviously you shouldn't test whether x in x = 7 is really 7 after assignment. Testing generated getters/setters is also overkill. But it is exactly this easiest code that often breaks, all too often due to copy&paste errors or typos (especially in dynamic languages).
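For example, a use-case-level test would catch the copy-paste bug above without pinning the implementation down method by method. A sketch assuming a hypothetical Filter class that wraps the two sets and exposes an invented accepts() query:

import java.util.HashSet;
import java.util.Set;

import org.junit.Assert;
import org.junit.Test;

public class FilterUseCaseTest {

    // Hypothetical class wrapping the two sets from the snippet above;
    // accepts() is an invented business-level query.
    static class Filter {
        private final Set<String> inclusions = new HashSet<String>();
        private final Set<String> exclusions = new HashSet<String>();
        public void include(String item) { inclusions.add(item); }
        public void exclude(String item) { exclusions.add(item); }
        public boolean accepts(String item) {
            return inclusions.contains(item) && !exclusions.contains(item);
        }
    }

    @Test
    public void excludedItemsAreNotAccepted() {
        Filter filter = new Filter();
        filter.include("apples");
        filter.include("oranges");
        filter.exclude("oranges");

        Assert.assertTrue(filter.accepts("apples"));
        // With the copy-paste bug above, this assertion fails:
        // "oranges" ends up in inclusions instead of exclusions.
        Assert.assertFalse(filter.accepts("oranges"));
    }
}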
See also:
Mutation testing
Your first few TDD projects are going to probably result in worse design/redesign and take longer to complete as you are learning (at least in my experience). This is why you shouldn't jump into using TDD on a large critical project.
My advice is to use "pure" TDD (acceptance/unit test everything test-first) on a few small projects (100-10,000 LOC). Either do the side projects on your own or if you don't code in your free time, use TDD on small internal utility programs for your job.
After you do "pure" TDD on about 6-12 projects, you will start to understand how TDD affects design and learn how to design for testability. Once you know how to design for testability, you will need to TDD less and maximize the ROI of unit, regression, acceptance, etc. tests rather than test everything up front.
For me, TDD is more of a teaching method for good code design than a practical methodology. However, I still TDD logic code, and I unit test instead of debugging.
There is no simple answer to this question. There is the law of diminishing returns in action, so achieving perfect coverage is seldom worth it. Knowing what to test is a thing of experience, not rules. It’s best to consciously evaluate the process as you go. Did something break? Was it feasible to test? If not, is it possible to rewrite the code to make it more testable? Is it worth it to always test for such cases in the future?
If you split your code into models, views and controllers, you’ll find that most of the critical code is in the models, and those should be fairly testable. (That’s one of the main points of MVC.) If a piece of code is critical, I test it, even if it means that I would have to rewrite it to make it more testable. If a piece of code is easy to get wrong or get broken by future updates, it gets a test. I seldom test controllers and views, as it’s not proving worth the trouble for me.
The way I see it all of your code falls into one of three buckets:
Code that is easy to test: This includes your own deterministic public methods.
Code that is difficult to test: This includes GUI, non-deterministic methods, private methods, and methods with complex setup.
Code that you don't want to test: This includes 3rd party code, and code that is difficult to test and not worth the effort.
Of the three, you should focus on testing the easy code. The difficult-to-test code should be refactored into two parts: code that you don't want to test and easy code. And of course, you should test the refactored easy code.
I think you should only unit test entry points to the behavior of the system. This includes public methods, public accessors and public fields, but not constants (constant fields, enums, methods, etc.). It also includes any code which directly deals with IO; I explain why further below.
My reasoning is as follows:
Everything that's public is basically an entry point to a behavior of the system. A unit test should therefore be written that guarantees that the expected behavior of that entry point works as required. You shouldn't test all possible ways of calling the entry point, only the ones that you explicitly require. Your unit tests are therefore also the specs of what behavior your system supports and your documentation of how to use it.
Things that are not public can basically be deleted/re-factored at will with no impact to the behavior of the system. If you were to test those, you'd create a hard dependency from your unit test to that code, which would prevent you from doing refactoring on it. That's why you should not test anything else but public methods, fields and accessors.
Constants by design are not behavior, but axioms. A unit test that verifies a constant is itself a constant, so it would only be duplicated code and useless effort to write a test for constants.
So to answer your specific example:
should I create a unit test for a string formatting that's supposed
to be user input?
Yes, absolutely. All methods which receive or send external input/output (which can be summed up as performing IO) should be unit tested. This is probably the only case where I'd say non-public things that receive IO should also be unit tested. That's because I consider IO to be a public entry point. Anything that's an entry point to an external actor I consider public.
So unit test public methods, public fields, public accessors, even when those are static constructs and also unit test anything which receives or sends data from an external actor, be it a user, a database, a protocol, etc.
NOTE: You can write temporary unit tests on non public things as a way for you to help make sure your implementation works. This is more of a way to help you figure out how to implement it properly, and to make sure your implementation works as you intend. After you've tested that it works though, you should delete the unit test or disable it from your test suite.
Kent Beck, in Extreme Programming Explained, said you only need to test the things that need to work in production.
That's a brusque way of encapsulating both test-driven development, where every change in production code is supported by a test that fails when the change is not present; and You Ain't Gonna Need It, which says there's no value in creating general-purpose classes for applications that only deal with a couple of specific cases.
I think you have to change your point of view.
In a pure form TDD requires the red-green-refactor workflow:
write test (it must fail) RED
write code to satisfy test GREEN
refactor your code
So the question "What I have to test?" has a response like: "You have to write a test that correspond to a feature or a particular requirements".
In this way you get must code coverage and also a better code design (remember that TDD stands also for Test Driven "Design").
Generally speaking you have to test ALL public method/interfaces.
should I create a unit test for a string formatting that's supposed
to be user input? Or is it just wasting my time while I just can check
it in the actual code?
Not sure I understand what you mean, but the tests you write in TDD are supposed to test your production code. They aren't tests that check user input.
To put it another way, there can be TDD unit tests that test the user input validation code, but there can't be TDD unit tests that validate the user input itself.
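For example, a TDD-style test pins down the behaviour of your formatting/validation routine for representative inputs; it does not (and cannot) check what the user will actually type at runtime. A minimal sketch, assuming a hypothetical formatPhoneNumber helper invented for the example:

import org.junit.Assert;
import org.junit.Test;

public class PhoneNumberFormatterTest {

    // Hypothetical routine under test; in TDD these tests would exist
    // before the implementation does.
    static String formatPhoneNumber(String raw) {
        String digits = raw.replaceAll("\\D", "");
        return digits.replaceFirst("(\\d{3})(\\d{3})(\\d{4})", "($1) $2-$3");
    }

    @Test
    public void stripsNoiseAndFormatsTenDigits() {
        Assert.assertEquals("(555) 123-4567", formatPhoneNumber(" 555.123.4567 "));
    }

    @Test
    public void tooShortInputIsLeftAsBareDigits() {
        // The test specifies what the code does with bad input,
        // not what the user is allowed to type.
        Assert.assertEquals("12345", formatPhoneNumber("12345"));
    }
}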

Goal of unit testing and TDD: find/minimize bugs or improve design?

I'm fairly green to unit testing and TDD, so please bear with me as I ask what some may consider newbie questions, or if this has been debated before. If this turns out to be considered a "bad question" (too subjective and open for debate), I will happily close it. However, I've searched for a couple days, and am not getting a definitive answer, and I need a better understand of this, so I know no better way to get more info than to post here.
I've started reading an older book on unit testing (because a colleague had it on hand), and its opening chapter talks about why to unit test. One of the points it makes is that in the long run, your code is much more reliable and cleaner, and less prone to bugs. It also points out that effective unit testing will make tracking and fixing bugs much easier. So it seems to focus quite a bit on the overall prevention/reduction of bugs in your code.
On the other hand, I also found an article about writing great unit tests, and it states that the goal of unit testing is to make your design more robust, and conversely, finding bugs is the goal of manual testing, not unit testing.
So being the newbie to TDD that I am, I'm a little confused as to the state of mind with which I should go into TDD and building my unit tests. I'll admit that part of the reason I'm taking this on now with my recently started project is because I'm tired of my changes breaking previously existing code. And admittedly, the linked article above does at least point this out as an advantage of TDD. But my hope in going back and adding unit tests to my existing code (and then continuing TDD from this point forward) is to help prevent these bugs in the first place.
Are this book and this article really saying the same thing in different tones, or is there some subjectivity on this subject, and what I'm seeing is just two people having somewhat different views on how to approach TDD?
Thanks in advance.
Unit tests and automated tests generally are for both better design and verified code.
A unit test should test a single execution path in some very small unit, usually a public method or an internal method exposed by your object. The method itself can still use many other protected or private methods from the same object instance. You can have a single method and several unit tests for it, each exercising a different execution path. (By execution path I mean something controlled by if, switch, etc.) Writing unit tests this way validates that your code really does what you expect. This can be especially important in corner cases, where you expect to throw an exception in some rare scenario, etc. You can also test how the method behaves if you pass different parameters - for example null instead of an object instance, or a negative value for an integer used for indexing. That is especially useful for a public API.
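A sketch of what "several unit tests for one method" might look like, each covering a separate execution path (the Repository class here is invented for illustration):

import java.util.ArrayList;
import java.util.List;

import org.junit.Assert;
import org.junit.Test;

public class RepositoryTest {

    // Hypothetical unit under test.
    static class Repository {
        private final List<String> items = new ArrayList<String>();
        void add(String item) {
            if (item == null) {
                throw new IllegalArgumentException("item must not be null");
            }
            items.add(item);
        }
        String get(int index) {
            if (index < 0 || index >= items.size()) {
                throw new IndexOutOfBoundsException("index: " + index);
            }
            return items.get(index);
        }
    }

    @Test
    public void addAndGet_happyPath() {
        Repository repo = new Repository();
        repo.add("first");
        Assert.assertEquals("first", repo.get(0));
    }

    @Test(expected = IllegalArgumentException.class)
    public void add_rejectsNull() {
        new Repository().add(null); // corner case: null instead of an instance
    }

    @Test(expected = IndexOutOfBoundsException.class)
    public void get_rejectsNegativeIndex() {
        new Repository().get(-1); // corner case: negative index
    }
}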
Now suppose that your tested method also uses instances of other classes. How do you deal with that? Should you still test only your single method and trust that those classes work? What if a class is not implemented yet? What if it has some complex logic inside? Should you test its execution paths as well, through your current method? There are two approaches to dealing with this:
For some cases you will simply let the real class instance be tested together with your method. This is, for example, very common for logging (it is not bad to have logs available during the test as well).
For other scenarios you would like to take these dependencies away from your method, but how? The solution is dependency injection and implementing against an abstraction instead of an implementation. What does that mean? It means that your method / class will not create instances of these dependencies itself; instead it will get them through method parameters, the class constructor or class properties. It also means that you will not expect a concrete implementation, but rather an abstract base class or an interface. This allows you to pass a fake, dummy or mock implementation to your tested object. These special types of implementations don't do any real processing: they receive some data and return an expected result. This lets you test your method without its dependencies and leads to a much better and more extensible design.
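A minimal sketch of constructor injection with a hand-rolled fake (the interface and class names are assumptions made for the example):

import org.junit.Assert;
import org.junit.Test;

public class PriceServiceTest {

    // The abstraction the tested class depends on.
    interface ExchangeRateProvider {
        double rateFor(String currency);
    }

    // Hypothetical class under test: receives its dependency through the constructor.
    static class PriceService {
        private final ExchangeRateProvider rates;
        PriceService(ExchangeRateProvider rates) { this.rates = rates; }
        double priceIn(String currency, double basePrice) {
            return basePrice * rates.rateFor(currency);
        }
    }

    @Test
    public void convertsUsingTheInjectedRate() {
        // Fake implementation: no network call, no real processing, just canned data.
        ExchangeRateProvider fakeRates = new ExchangeRateProvider() {
            public double rateFor(String currency) { return 2.0; }
        };
        PriceService service = new PriceService(fakeRates);
        Assert.assertEquals(20.0, service.priceIn("EUR", 10.0), 0.0001);
    }
}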
What is the disadvantage? Once you start using fakes / mocks, you are testing a single method / class, but you don't have a test which takes all the real implementations, puts them together and checks that the whole system really works. You can have thousands of unit tests validating that each of your methods works, but that doesn't mean they will work together. That is the scenario for more complex tests - integration or end-to-end tests.
Unit tests should usually be very easy to write - if they are not, it means that your design is probably too complicated and you should think about refactoring. They should also be very fast to execute, so you can run them very often. Other kinds of tests can be more complex and very slow, and they should run mostly on the build server.
How does this fit into the software development process? The worst part of the development process is stabilization and bug fixing, because this part is very hard to estimate. To be able to estimate how long a bug fix will take, you must know what causes the bug, but that investigation itself cannot be estimated. You can have a bug which takes one hour to fix, but you may spend two weeks debugging your application and searching for it. With good code coverage you will most probably find such a bug early during development.
Automated testing doesn't prove that the software contains no bugs. It only says that you did your best to find and solve them during development, and because of that your stabilization phase should be much less painful and much shorter. It also doesn't say that your software does what it should - that is more about the application logic itself, which must be covered by separate tests going through each use case / user story - acceptance tests (these can also be automated).
How does this fit with TDD? TDD takes it to the extreme, because in TDD you write your tests first, to drive your quality, code coverage and design.
It's a false choice. "Find/minimize bugs" OR improve design.
TDD, in particular (and as opposed to "just" unit testing) is all about giving you better design.
And when your design is better, what are the consequences?
Your code is easier to read
Your code is easier to understand
Your code is easier to test
Your code is easier to reuse
Your code is easier to debug
Your code has fewer bugs in the first place
With well-designed code, you spend less time finding and fixing bugs, and more time adding features and polish. So TDD gives you a savings on bugs and bug-hunting, by giving you better design. These things are not separate; they are dependent and interrelated.
There can be many different reasons why you might want to test your code. Personally, I test for a number of reasons:
I usually design an API using a combination of the normal design patterns (top-down) and test-driven development (TDD; bottom-up) to ensure that I have a sound API both from a best-practices point of view and from an actual usage point of view. The focus of the tests is on the major use-cases for the API, but also on the completeness of the API and its behavior - so they are primarily "black box" tests. The development sequence is often:
main API based on design patterns and "gut feeling"
TDD tests for the major use-cases according to the high-level specification for the API - primarily in order to make sure the API is "natural" and easy to use
fleshed out API and behavior
all the needed test cases to ensure the completeness and correct behavior
Whenever I fix an error in my code, I try to write a test to make sure it stays fixed. Somehow the error got into my original design and passed my original testing of the code, so it is probably not all that trivial. I have noticed that many of these tests are "white box" tests.
In order to be able to make any sort of major re-factoring of the code, you need an extensive set of API tests to make sure the behavior of the code stays the same after the re-factoring. For any non-trivial API, I want the test suite to be in place and working for a long time before the re-factoring, to be sure that all the major use-cases are covered in a good way. As often as not, you are forced to throw away most of your "white box" tests, as they - by their very definition - make too many assumptions about the internals. I usually try to "translate" as many of these tests as possible, since the same non-trivial problems tend to survive re-factoring of the code.
In order to transfer any code between developers, I usually also want a good test suite with focus on the API and the major use-cases. So basically the tests from the initial TDD...
I think that answer to your question is: both.
You will improve design because there is one particular thing about TDD that is great: while you write tests you put yourself in the position of the client code that will be using the system under test - and this alone makes you think about certain design choices.
For example: UI. When you start writing the tests, you will see that those God-Forms are impossible to test, so you separate the logic behind the screens to a presenter/controller, and you get MVP/MVC/whatever.
Having the concept of unit testing a class and mocking dependencies brings you to the Single Responsibility Principle. There is a similar point to be made about each of the SOLID principles.
As for bugs, well, if you unit test every method of every class you write (except properties, very simple methods and such), you will catch most bugs at the start. Add integration tests and you cover almost all of them.
I'll take my stab at this using a remix of a previous answer I wrote. In short, I don't see this as a dichotomy between driving good design and minimizing bugs. I see it more as one (good design) leading to the other (minimizing bugs).
I tend towards saying TDD is a design process that happens to involve unit testing. It's a design process because within each Red-Green-Refactor iteration, you write the test first for code that doesn't exist. You're designing as you're going.
The first beauty of TDD is that the design of your code is guaranteed to be testable. Testable code tends to have loose coupling and high cohesion. Loose coupling and high cohesion are important because they make the code easy to change when requirements change. The second beauty of TDD is that after you're done implementing your system, you happen to have a huge regression suite to catch any bugs and changes in assumptions. Thus, TDD makes your code easy to change because of the design it creates and it makes your code safe to change because of the test harness it creates.
Trying to retrospectively add unit tests can be quite painful and expensive. If the code doesn't support unit testing, you may be better off looking at integration tests to test your code.
Don't mix Unit Testing with TDD.
Unit testing is just the act of testing your code to ensure its quality and maintainability.
TDD is a full blown development methodology in which you first write your tests (based on requirements), and only then you write the needed code (and just the needed code) to make that test pass. This means that you only write code to repair a broken test.
Once that is done, you write another test, and the code needed to make it pass. Along the way, you may be forced to refactor the code to allow a new test to run without breaking another. This way, the "design" arises from the tests.
The purpose of this methodology is of course to reduce bugs and improve design, but its main goal is to improve productivity, because you write exactly the code you need. And you don't write documentation: the tests are the documentation. If a requirement changes, then you change the tests and the code afterwards. If new requirements appear, just add new tests.

Prove correctness of unit test

I'm creating a graph framework for learning purposes. I'm using a TDD approach, so I'm writing a lot of unit tests. However, I'm still figuring out how to prove the correctness of my unit tests.
For example, I have this class (not including the implementation, and I have simplified it)
public class SimpleGraph {
    //Returns true on success
    public boolean addEdge(Vertex v1, Vertex v2) { ... }
    //Returns true on success
    public boolean addVertex(Vertex v1) { ... }
}
I have also created this unit test:
@Test
public void SimpleGraph_addVertex_noSelfLoopsAllowed(){
    SimpleGraph g = new SimpleGraph();
    Vertex v1 = new Vertex("Vertex 1");
    g.addVertex(v1);
    boolean expected = false;
    boolean actual = g.addEdge(v1, v1);
    Assert.assertEquals(expected, actual);
}
Okay, awesome, it works. There is only one crux here: I have proved that the functions work for this case only. However, in my graph theory courses, all I'm doing is proving theorems mathematically (induction, contradiction, etc.).
So I was wondering: is there a way I can mathematically prove my unit tests correct? Is there a good practice for this, so that we're testing the unit for correctness instead of testing it for one particular outcome?
No. Unit tests don't attempt to prove correctness in the general case. They should test specific examples. The idea is to pick enough representative examples that if there is an error it will probably be found by one or more of the tests, but you can't be sure to catch all errors this way. For example if you were unit testing an add function you might test some positive numbers, some negative, some large numbers and some small, but using this approach alone you'd be lucky to find the case where this implementation doesn't work:
int add(int a, int b) {
    if (a == 1234567 && b == 2461357) { return 42; }
    return a + b;
}
You would however be able to spot this error by combining unit testing and code coverage. However even with 100% code coverage there can be logical errors which didn't get caught by any tests.
It is possible to prove code for correctness. It is called formal verification, but it's not what unit tests are for. It's also expensive to do for all but the most simple software so it is rarely done in practice.
Probably not. Unit tests approach the problem by exhaustive testing:
You verify that your test works by writing the test before implementing the behavior.
Then you see that the test fails.
Then you implement the behavior to pass that test, and only that test. Never write code that is not needed to implement a test.
Really, what you're proving is that one case of your algorithm is working, i.e. you're proving that a subset of your execution paths are valid. Testing will never help you prove correctness in the strict mathematical sense (except for very simple cases). In the general case, this is impossible. Testing is a pragmatic approach to this problem where we try to show that representative cases are correct (boundary values, values somewhere in the middle, etc.) and hope that that works.
Still, some tools such as FindBugs manage to give you conservative proofs of some properties of your code.
If you would like formal proof of your stuff, there's always Coq, Agda and similar languages, but that's a hell of a stretch from writing a unit test :)
One great, simple introduction to testing vs. proofs is Abstract Interpretation in a Nutshell by Patrick Cousot.
There are tools for formally specifying how your code operates, and even tools to prove that it works that way, but they are far away from the unit-testing area.
Two examples from the Java world are JML and ESC/Java2
NASA has a whole department dedicated to formal methods.
My 2 cents. Look at it this way: you think you wrote a function that does something, but what you really did was write a function that you think does something. If you cannot write a mathematical proof of what the code does, you may as well treat the function as a hypothesis; you cannot be sure it will always be correct, but at least it is falsifiable.
And that's why we write unit tests (note: just other functions, themselves prone to bugs, sigh): to try to falsify the hypothesis by finding counterexamples for which it does not hold.
If you want to go for correctness properties of your code, you can, as already mentioned in previous posts, apply some formal verification tools. This is not an easy thing to do, but may still be doable. There are tools like the KeY system capable of proving first-order properties for Java code. KeY has some problems with things like generics, floats and parallelism, but works quite well for most concepts of the Java language. Moreover, you can automatically create test cases with KeY based on the proof tree.
If you are familiar with JML (this is not hard to learn, basically Java with a bit of logic), you could try out this approach. For really critical parts of your systems, verification might really be something to think about; for other parts of the code, testing some of the possible traces with unit testing might already be sufficient, for example to avoid regression problems.

What to test when writing Unit Tests?

I want to begin unit testing our application, because I believe that this is the first step to developing a good relationship with testing and will allow me to branch into other forms of testing, most interestingly BDD with Cucumber.
We currently generate all of our Base classes using CodeSmith, based entirely on the tables in a database. I am curious as to the benefits of generating test cases alongside these Base classes. Is this poor testing practice?
This leads me to the ultimate question of my post. What do we test when using Unit Tests?
Do we test the examples we know we want out? or do we test the examples we do not want?
There can be methods that have multiple ways of failing and multiple ways of succeeding; how do we know when to stop?
Take a summing function, for example. Give it 1, 2 and expect 3 in the only unit test... how do we know that 5, 6 isn't coming back as 35?
Question Recap
Generating unit tests (Good/Bad)
What/How much do we test?
Start with your requirements and write tests that test the expected behavior. From that point on, how many other scenarios you test can be driven by your schedule, or maybe by your recognizing non-success scenarios that are particularly high-risk.
You might consider writing non-success tests only in response to defects you (or your users) discover (the idea being that you write a test that tests the defect fix before you actually fix the defect, so that your test will fail if that defect is re-introduced into your code in future development).
The point of unit tests is to give you confidence (but only in special cases does it give you certainty) that the actual behavior of your public methods matches the expected behavior. Thus, if you have a class Adder
class Adder { public int Add(int x, int y) { return x + y; } }
and a corresponding unit test
[Test]
public void Add_returns_that_one_plus_two_is_three() {
    Adder a = new Adder();
    int result = a.Add(1, 2);
    Assert.AreEqual(3, result);
}
then this gives you some (but not 100%) confidence that the method under test is behaving appropriately. It also gives you some defense against breaking the code upon refactoring.
What do we test when using Unit Tests?
The actual behavior of your public methods against the expected (or specified) behavior.
Do we test the examples we know we want out?
Yes, one way to gain confidence in the correctness of your method is to take some input with known expected output, execute the public method on the input and compare the actual output to the expected output.
What to test: Everything that has ever gone wrong.
When you find a bug, write a test for the buggy behavior before you fix the code. Then, when the code is working correctly, the test will pass, and you'll have another test in your arsenal.
1) To start, I'd recommend testing your app's core logic.
2) Then, use the code coverage tool in VS to see whether all of your code is exercised by the tests (all branches of if-else and case conditions are invoked).
This is some sort of an answer to your question about testing 1+2 = 3, 5+6 = 35: when the code is covered, you can feel safe with further experiments.
3) It's good practice to cover 80-90% of the code: testing the rest is usually inefficient work - getters/setters, one-line exception handling, etc.
4) Learn about separation of concerns.
5) Generating unit tests - try it; you'll see that you can save quite a few lines of code compared to writing them manually. I prefer generating the test file with VS, then writing the rest of the TestMethods myself.
You unit-test things where you
want to make sure your algorithm works
want to safeguard against accidental changes in the future
So in your example it would not make much sense to test the generated classes. Test the generator instead.
It's good practice to test the main use cases (what the tested function was designed for) first. Then you test the main error cases. Then you write tests for corner cases (i.e. lower and upper bounds). The unusual error cases are normally so hard to produce that it doesn't make sense to unit-test them.
If you need to verify a large range of parameter sets, use data-driven testing.
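In JUnit 4, for example, data-driven testing can be done with the Parameterized runner; a rough sketch, using plain addition as a stand-in for the real function under test:

import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class AddDataDrivenTest {

    @Parameters
    public static Collection<Object[]> data() {
        // Each row: a, b, expected sum.
        return Arrays.asList(new Object[][] {
            { 1, 2, 3 }, { 5, 6, 11 }, { -1, 1, 0 }, { 0, 0, 0 }
        });
    }

    private final int a, b, expected;

    public AddDataDrivenTest(int a, int b, int expected) {
        this.a = a;
        this.b = b;
        this.expected = expected;
    }

    @Test
    public void addsCorrectly() {
        // Stand-in for calling the real method under test.
        Assert.assertEquals(expected, a + b);
    }
}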
How many things you test is a matter of effort vs. return, so it really depends on the individual project. Normally you try to follow the 80/20 rule, but there may be applications where you need more test coverage because a failure would have very serious consequences.
You can dramatically reduce the time you need to write tests if you use a test-driven approach (TDD). That's because code that isn't written with testability in mind is much harder, sometimes near to impossible to test. But since nothing in life is free, the code developed with TDD tends to be more complex itself.
I'm also beginning the process of more consistently using unit tests and what I've found is that the biggest task in unit testing is structuring my code to support testing. As I start to think about how to write tests, it becomes clear where classes have become overly coupled, to the point that the complexity of the 'unit' makes defining tests difficult. I spend as much or more time refactoring my code as I do writing tests. Once the boundaries between testable units become clearer, the question of where to start testing resolves itself; start with your smallest isolated dependencies (or at least the ones you're worried about) and work your way up.
There are three basic events I test for:
min, max, and somewhere between min and max.
And where appropriate two extremes: below min, and above max.
There are obvious exceptions (some code may not have a min or max for example) but I've found that unit testing for these events is a good start and captures a majority of "common" issues with the code.
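A sketch of that checklist applied to a hypothetical clamping function (the name and the 0..100 range are invented for the example):

import org.junit.Assert;
import org.junit.Test;

public class PercentageBoundaryTest {

    // Hypothetical function under test: the valid input range is 0..100.
    static int clampPercentage(int value) {
        if (value < 0) { return 0; }
        if (value > 100) { return 100; }
        return value;
    }

    @Test public void atMin()     { Assert.assertEquals(0, clampPercentage(0)); }
    @Test public void atMax()     { Assert.assertEquals(100, clampPercentage(100)); }
    @Test public void inBetween() { Assert.assertEquals(50, clampPercentage(50)); }
    @Test public void belowMin()  { Assert.assertEquals(0, clampPercentage(-1)); }
    @Test public void aboveMax()  { Assert.assertEquals(100, clampPercentage(101)); }
}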

How much unit-testing before start coding a method/class?

I'm starting (trying at least) to code using TDD principles, and I have this question: how many tests do I need to write before I actually start coding?
Take, for example, a hypothetical Math class and a method Divide(int a, int b).
a) Do I have to fully test all methods of the Math class (Sum, Average, ...) before I start coding Math?
b) Do I have to fully test the Divide method, asserting for example for division by zero, before I start coding the method?
c) Or can I create a simple test assertion and verify that it fails, write the code and check that it's OK, repeating the process for each of the assertions of a method?
I think option c) is the correct one, but I couldn't find an answer to it (I did some searches but couldn't find anything definitive).
Your option c represents fully by-the-book TDD.
You write one failing test exercising a feature of the class that you are working on and then write only enough code to make that test pass. Then you do this again, for the next test.
By doing it this way you should then see each new piece of code you write being very focused upon a particular use-case/test and also find that your tests remain distinct in what they cover.
You want to end up working in a red-green-refactor fashion, so that periodically you go back over both your code and your tests for places where you can refactor things into a better design.
Of course, in the real world you may end up writing many red tests, or writing more code than a particular test requires, or even writing code without tests, but that is moving away from TDD and should only be done with caution.
The wikipedia article on this is actually quite good. http://en.wikipedia.org/wiki/Test-driven_development
The first thing you want to do is write a specification for each method you want to implement. In your specification, you need to address as many corner cases as you care about, and define the behavior your method should result in when executing those cases.
Once your specification is complete, you design tests for every part of your specification ensuring that each test is not passing or failing due to corner case conditions. At this point you are ready to code up your function implementation and tests. Once this is complete, you refine your specification/tests/implementation as necessary until the results are exactly what you desire from your implementation.
Then you document everything (particularly your reasoning for handling corner cases).
Like others have mentioned, your option c would be the pure TDD way to do this. The idea is to build your code up in small red-green-refactor increments. A good simple example of this is Robert Martin's Bowling Kata.
Well, you can probably write:
@Test
public void testDivide() {
    Math math = new Math();
    int result = math.divide(20, 2);
    Assert.assertEquals(10, result);
}
That's it. Now when you run this, your test will fail, so you fix your Math.divide method and add cases in the next step.
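For instance, the next case might be a new failing test for division by zero (whether it should throw or return a sentinel is itself a requirement the test pins down); this is only a sketch added to the same hypothetical test class:

@Test(expected = ArithmeticException.class)
public void testDivideByZero() {
    Math math = new Math();
    math.divide(20, 0); // the new red test drives the next bit of behaviour
}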
This is the ideal way, but everyone knows it doesn't always work out like this.
The definition of "unit" is not universal; sometimes the unit is the class, sometimes it can be a method. So there is no real universal answer.
In this particular case, I would consider the unit to be a method, so I wouldn't test all methods before starting to code. Instead, I would do things incrementally, method after method. This eliminates a).
However, when writing a test for a method, I would write a rigorous test, i.e. I would test the passing and the non-passing cases, test at the limits, test the special values, etc. When writing a test, you're defining a contract, and this contract should include exceptional situations. Thinking of them from the start does help. And for me, the point of having a green light is to be done, so I want my test to be exhaustive. And I think that this is b).
If your test is not exhaustive, then you're actually not done with your unit, even if the test is passing, and I don't really see the point. I guess that this is c).
So my choice would be b).
If you are going for completeness, you should design and code your unit tests prior to development and then develop the primary functionality against the created unit tests. The more thorough you are, the clearer the scope and the better the final quality. If time and functionality allow, I would create tests for each method/function.