Unit test documentation [closed] - unit-testing

I would like to hear from those who document their unit tests how they do it. I understand that most TDD followers claim "the code speaks" and that test documentation therefore matters little, because the code should be self-descriptive. Fair enough, but I would like to know how to document unit tests, not whether to document them at all.
My experience as a developer tells me that understanding old code (this includes unit tests) is difficult.
So what is important in test documentation? When is a test method name alone not descriptive enough, so that additional documentation is justified?

As requested by Thorsten79, I'll elaborate on my comments as an answer. My original comment was:
"The code speaks" is unfortunately completely wrong, because a non-developer cannot read the code, while he can at least partially read and understand generated documentation, and this way he can know what the tests test. This is especially important in cases where the customer fully understands the domain and just can't read code, and gets even more important when the unit tests also test hardware, as in the embedded world, because then you test things that can be seen.
When you're doing unit tests, you have to know whether you're writing them just for you (or for your co-workers), or if you're also writing them for other people. Many times, you should be writing code for your readers, rather than for your convenience.
In mixed hardware/software development like in my company, the customers know what they want. If their field device has to do a reset when receiving a certain bus command, there must be a unit test that sends that command and checks whether the device was reset. We're doing this here right now with NUnit as the unit test framework, and some custom software and hardware that makes sending and receiving commands (and even pressing buttons) possible. It's great, because the only alternative would be to do all that manually.
The customer absolutely wants to know which tests are there, and he even wants to run the tests himself. If the tests are not properly documented, he doesn't know what a test does and can't check whether all the tests he thinks he'll need are there, and when running a test he doesn't know what it will do, because he can't read the code. He knows the bus system in use better than our developers do, but he just can't read code. If a test fails, he does not know why and cannot even say what he thinks the test should do. That's not a good thing.
Having documented the unit tests properly, we have
code documentation for the developers
test documentation for the customer, which can be used to prove that the device does what it should do, i.e. what the customer ordered
the ability to generate the documentation in any format, which can even be passed to other involved parties, like the manufacturer
Properly in this context means: write in clear language that can be understood by non-developers. You can stay technical, but don't write things only you can understand. The latter is of course also important for any other comments and any code.
Independent of our exact situation, I think that's what I would want in unit tests all the time, even if they're pure software. A customer can ignore a unit test he doesn't care about, like basic function tests. But just having the docs there never hurts.
As I've written in a comment to another answer: In addition, the generated documentation is also a good starting point if you (or your boss, or co-worker, or the testing department) wants to examine which tests are there and what they do, because you can browse it without digging through the code.

In the test code itself:
With method-level comments explaining what the test is testing / covering.
At the class level, a comment indicating the actual class being tested (which can usually be inferred from the test class name, so it's less important than the comments at the method level).
With test coverage reports:
Such as Cobertura. Those are also documentation, since they indicate what your tests are covering and what they're not.

Comment complex tests or scenarios if required, but favour readable tests in the first place. In other words, I try to make my tests speak for themselves:
[Test]
public void person_should_say_hello() {
    // Arrange.
    var person = new Person();

    // Act.
    string result = person.SayHello();

    // Assert.
    Assert.AreEqual("Hello", result, "Person did not say hello");
}
If I were to look at this test I'd see that it uses Person (PersonTest.cs would be a clue as well ;)) and that if anything breaks it will occur in the SayHello method. The assert message is useful as well, not only for reading the test but also when tests are run, since it makes failures easier to spot in a GUI test runner.
Following the AAA style of Arrange, Act and Assert makes the test essentially document itself. If this test were more complex, you could add comments above the test function explaining what's going on. As always, you should ensure these are kept up to date.
As a side note, using underscore notation for test names makes them much more readable; compare the name above to:
public void PersonShouldSayHello()
which, for long method names, can make reading the test more difficult. Though this point is often subjective.

When I come back to an old test and don't understand it right away:
1. I refactor it if possible,
2. or I write the comment that would have made me understand it right away.
When you are writing your test cases it is the same as when you are writing your code: everything is crystal clear to you. That makes it difficult to envision what you should write to make the code clearer.
Note that this does not mean I never write any comments. There are still plenty of situations where I just know that I am going to have a hard time figuring out what a particular piece of code does.
I usually start with point 1 in these situations...

Improving unit tests into an executable specification is the point of Behaviour-Driven Development: BDD is an evolution of TDD where unit tests use a Ubiquitous Language (a language based on the business domain and shared by the developers and the stakeholders) and expressive names (testCannotCreateDuplicateEntry) to describe what the code is supposed to do. Some BDD frameworks push the idea very far and produce executable specifications written in almost natural language.

I would advise against any detailed documentation separate from code. Why? Because whenever you need it, it will most likely be very outdated. The best place for detailed documentation is the code itself (including comments). BTW, anything you need to say about a specific unit test is very detailed documentation.
A few pointers on how to write self-documenting tests (a short sketch follows this list):
Follow a standard way to write all tests, like AAA pattern. Use a blank line to separate each part. That makes it much easier for the reader to identify the important bits.
You should include, in every test name: what is being tested, the situation under test and the expected behavior. For example: test__getAccountBalance__NullAccount__raisesNullArgumentException()
Extract out common logic into set up/teardown or helper methods with descriptive names.
Whenever possible use samples from real data for input values. This is much more informative than blank objects or made up JSON.
Use variables with descriptive names.
Think about your future you/teammate, if you remembered nothing about this, would you like any additional information when the test fails? Write that down as comments.
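A minimal NUnit sketch of what those pointers can look like together. Account, AccountService and the helper method are invented stand-ins so the sketch compiles, not code from the question:
using System;
using NUnit.Framework;

// Minimal stand-ins for the code under test, purely for illustration.
public class Account
{
    private decimal balance;
    public void Deposit(decimal amount) { balance += amount; }
    public void Withdraw(decimal amount) { balance -= amount; }
    public decimal Balance { get { return balance; } }
}

public class AccountService
{
    public decimal GetAccountBalance(Account account)
    {
        if (account == null) throw new ArgumentNullException("account");
        return account.Balance;
    }
}

[TestFixture]
public class AccountServiceTests
{
    // Name states what is tested, the situation and the expected behavior.
    [Test]
    public void GetAccountBalance_AccountIsNull_ThrowsArgumentNullException()
    {
        // Arrange.
        var service = new AccountService();
        Account nullAccount = null;

        // Act + Assert.
        Assert.Throws<ArgumentNullException>(() => service.GetAccountBalance(nullAccount));
    }

    [Test]
    public void GetAccountBalance_DepositsAndWithdrawals_ReturnsNetAmount()
    {
        // Arrange: sample values modelled on realistic data rather than blank objects.
        var service = new AccountService();
        var account = AccountWithTransactions(deposits: 150.00m, withdrawals: 40.00m);

        // Act.
        decimal balance = service.GetAccountBalance(account);

        // Assert.
        Assert.AreEqual(110.00m, balance, "Balance should be deposits minus withdrawals");
    }

    // A helper with a descriptive name keeps the Arrange sections short and readable.
    private static Account AccountWithTransactions(decimal deposits, decimal withdrawals)
    {
        var account = new Account();
        account.Deposit(deposits);
        account.Withdraw(withdrawals);
        return account;
    }
}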
And to complement what other answers have said:
It's great if your customer/Product Owner/boss has a very good idea as to what should be tested and is eager to help, but unit tests are not the best place to do it. You should use acceptance tests for this.
Unit tests should cover specific units of code (methods/functions within classes/modules); if you cover more ground, they will quickly turn into integration tests, which are fine and needed too, but if you do not separate them explicitly, people will just get them confused and you will lose some of the benefits of unit testing. For example, when a unit test fails you should get instant bug detection (especially if you follow the naming convention above). When an integration test fails, you know there is a problem, and you know some of its effects, but you might need to debug, sometimes for a long time, to find what it is.
You can use unit testing frameworks for integration tests if you want, but you should know you are not doing unit testing, and you should keep them in separate files/directories.
There are good acceptance/behavior testing frameworks (FitNesse, Robot, Selenium, Cucumber, etc.) that can help business/domain people not just read, but also write the tests themselves. Sure, they will need help from coders to get them to work (especially when starting out), but they will be able to do it, and they do not need to know anything about your modules, classes or functions.

Related

Unit Testing : what to test / what not to test?

Since a few days ago I've started to feel interested in Unit Testing and TDD in C# and VS2010. I've read blog posts, watched youtube tutorials, and plenty more stuff that explains why TDD and Unit Testing are so good for your code, and how to do it.
But the biggest problem I find is, that I don't know what to check in my tests and what not to check.
I understand that I should check all the logical operations, problems with references and dependencies, but for example, should I create a unit test for a string-formatting routine whose input is supposed to be user input? Or is that just wasting my time when I can simply check it in the actual code?
Is there any guide to clarify this problem?
In TDD every line of code must be justified by a failing test-case written before the code.
This means that you cannot develop any code without a test-case. If you have a line of code (condition, branch, assignment, expression, constant, etc.) that can be modified or deleted without causing any test to fail, it means this line of code is useless and should be deleted (or you have a missing test to support its existence).
That is a bit extreme, but this is how TDD works. That being said, if you have a piece of code and you are wondering whether it should be tested or not, you are not doing TDD correctly. But if you have a string formatting routine or a variable incrementation or whatever small piece of code out there, there must be a test case supporting it.
UPDATE (use-case suggested by Ed.):
Like for example, adding an object to a list and creating a test to see if it is really inside or there is a duplicate when the list shouldn't allow them.
Here is a counterexample, you would be surprised how hard it is to spot copy-paste errors and how common they are:
private Set<String> inclusions = new HashSet<String>();
private Set<String> exclusions = new HashSet<String>();

public void include(String item) {
    inclusions.add(item);
}

public void exclude(String item) {
    inclusions.add(item); // copy-paste bug: should be exclusions.add(item)
}
On the other hand, testing the include() and exclude() methods alone is overkill, because they do not represent any use-cases by themselves. However, they are probably part of some business use-case, which you should test instead.
Obviously you shouldn't test whether x in x = 7 is really 7 after assignment. Also, testing generated getters/setters is overkill. But it is often the simplest code that breaks, all too often due to copy&paste errors or typos (especially in dynamic languages).
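For what it's worth, here is a rough use-case-level test, transposed to C#/NUnit since the rest of this thread leans that way. The Filter wrapper and its IsIncluded method are my own invention (with the same bug reproduced on purpose); the point is that the test catches the copy-paste error without testing include() and exclude() in isolation:
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical C# counterpart of the Java snippet above, bug included deliberately.
public class Filter
{
    private readonly HashSet<string> inclusions = new HashSet<string>();
    private readonly HashSet<string> exclusions = new HashSet<string>();

    public void Include(string item) { inclusions.Add(item); }

    public void Exclude(string item) { inclusions.Add(item); } // same copy-paste bug

    public bool IsIncluded(string item)
    {
        return inclusions.Contains(item) && !exclusions.Contains(item);
    }
}

[TestFixture]
public class FilterTests
{
    [Test]
    public void ExcludedItem_IsNotReportedAsIncluded()
    {
        var filter = new Filter();
        filter.Include("alpha");
        filter.Exclude("beta");

        // Fails while the bug is present: Exclude("beta") wrongly added "beta" to the inclusions.
        Assert.IsFalse(filter.IsIncluded("beta"), "An excluded item must not be reported as included");
        Assert.IsTrue(filter.IsIncluded("alpha"));
    }
}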
See also:
Mutation testing
Your first few TDD projects are going to probably result in worse design/redesign and take longer to complete as you are learning (at least in my experience). This is why you shouldn't jump into using TDD on a large critical project.
My advice is to use "pure" TDD (acceptance/unit test everything test-first) on a few small projects (100-10,000 LOC). Either do the side projects on your own or if you don't code in your free time, use TDD on small internal utility programs for your job.
After you do "pure" TDD on about 6-12 projects, you will start to understand how TDD affects design and learn how to design for testability. Once you know how to design for testability, you will need to TDD less and maximize the ROI of unit, regression, acceptance, etc. tests rather than test everything up front.
For me, TDD is more of a teaching method for good code design than a practical methodology. However, I still TDD logic code, and I unit test instead of debugging.
There is no simple answer to this question. There is the law of diminishing returns in action, so achieving perfect coverage is seldom worth it. Knowing what to test is a thing of experience, not rules. It’s best to consciously evaluate the process as you go. Did something break? Was it feasible to test? If not, is it possible to rewrite the code to make it more testable? Is it worth it to always test for such cases in the future?
If you split your code into models, views and controllers, you’ll find that most of the critical code is in the models, and those should be fairly testable. (That’s one of the main points of MVC.) If a piece of code is critical, I test it, even if it means that I would have to rewrite it to make it more testable. If a piece of code is easy to get wrong or get broken by future updates, it gets a test. I seldom test controllers and views, as it’s not proving worth the trouble for me.
The way I see it all of your code falls into one of three buckets:
Code that is easy to test: This includes your own deterministic public methods.
Code that is difficult to test: This includes GUI, non-deterministic methods, private methods, and methods with complex setup.
Code that you don't want to test: This includes 3rd party code, and code that is difficult to test and not worth the effort.
Of the three, you should focus on testing the easy code. The difficult-to-test code should be refactored into two parts: code that you don't want to test and easy code. And of course, you should test the refactored easy code.
I think you should only unit test entry points to the behavior of the system. This includes public methods, public accessors and public fields, but not constants (constant fields, enums, methods, etc.). It also includes any code which directly deals with IO; I explain why further below.
My reasoning is as follows:
Everything that's public is basically an entry point to a behavior of the system. A unit test should therefore be written that guarantees that the expected behavior of that entry point works as required. You shouldn't test all possible ways of calling the entry point, only the ones that you explicitly require. Your unit tests are therefore also the specs of what behavior your system supports and your documentation of how to use it.
Things that are not public can basically be deleted/re-factored at will with no impact to the behavior of the system. If you were to test those, you'd create a hard dependency from your unit test to that code, which would prevent you from doing refactoring on it. That's why you should not test anything else but public methods, fields and accessors.
Constants by design are not behavior, but axioms. A unit test that verifies a constant is itself a constant, so it would only be duplicated code and useless effort to write a test for constants.
So to answer your specific example:
should I create a unit test for a string-formatting routine whose input is supposed to be user input?
Yes, absolutely. All methods which receive or send external input/output (which can be summed up as receiving IO), should be unit tested. This is probably the only case where I'd say non-public things that receive IO should also be unit tested. That's because I consider IO to be a public entry. Anything that's an entry point to an external actor I consider public.
So unit test public methods, public fields, public accessors, even when those are static constructs and also unit test anything which receives or sends data from an external actor, be it a user, a database, a protocol, etc.
NOTE: You can write temporary unit tests on non public things as a way for you to help make sure your implementation works. This is more of a way to help you figure out how to implement it properly, and to make sure your implementation works as you intend. After you've tested that it works though, you should delete the unit test or disable it from your test suite.
Kent Beck, in Extreme Programming Explained, said you only need to test the things that need to work in production.
That's a brusque way of encapsulating both test-driven development, where every change in production code is supported by a test that fails when the change is not present; and You Ain't Gonna Need It, which says there's no value in creating general-purpose classes for applications that only deal with a couple of specific cases.
I think you have to change your point of view.
In a pure form TDD requires the red-green-refactor workflow:
Red: write a test (it must fail).
Green: write code to satisfy the test.
Refactor your code.
So the question "What do I have to test?" has an answer like: "You have to write a test that corresponds to a feature or a particular requirement."
This way you get high code coverage and also a better code design (remember that TDD also stands for Test Driven "Design").
Generally speaking, you have to test ALL public methods/interfaces.
should I create a unit test for a string-formatting routine whose input is supposed to be user input? Or is that just wasting my time when I can simply check it in the actual code?
Not sure I understand what you mean, but the tests you write in TDD are supposed to test your production code. They aren't tests that check user input.
To put it another way, there can be TDD unit tests that test the user input validation code, but there can't be TDD unit tests that validate the user input itself.
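To make the distinction concrete, here is a hedged sketch; PhoneNumberFormatter is invented, not from the question. The tests exercise the input-formatting/validation code with sample good and bad inputs, which is quite different from trying to validate whatever a real user will eventually type:
using System;
using System.Linq;
using NUnit.Framework;

// A made-up formatter for user input; the real code under test would come from your application.
public class PhoneNumberFormatter
{
    public string Format(string input)
    {
        string digits = new string(input.Where(char.IsDigit).ToArray());
        if (digits.Length == 0)
            throw new FormatException("No digits in input: " + input);
        return digits;
    }
}

[TestFixture]
public class PhoneNumberFormatterTests
{
    // These tests check the validation/formatting code with sample inputs;
    // they do not (and cannot) validate the user's actual input at runtime.
    [Test]
    public void Format_StripsSpacesAndDashes()
    {
        var formatter = new PhoneNumberFormatter();
        Assert.AreEqual("555123456", formatter.Format("555-123 456"));
    }

    [Test]
    public void Format_RejectsInputWithoutDigits()
    {
        var formatter = new PhoneNumberFormatter();
        Assert.Throws<FormatException>(() => formatter.Format("call me"));
    }
}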

How do I really unit test code?

I was reading the Joel Test 2010 and it reminded me of an issue I had with unit testing.
How do I really unit test something? I don't unit test functions, only full classes? What if I have 15 classes that are each under 20 lines? Should I write a 35-line unit test for each class, bringing 15*20 lines to 15*(20+35) lines (that's from 300 to 825, nearly 3x more code)?
If a class is used by only two other classes in the module, should I unit test it, or would the tests against the other two classes suffice? What if they are all under 30 lines of code, should I bother?
If I write code to dump data and I never need to read it back because another app consumes it, and that other app either isn't command-line or gives me no way to verify whether the data is good, do I still need to unit test it?
What if the app is a utility with fewer than 500 lines of code in total? Or it is used this week and will be used again in the future, but always needs to be reconfigured because it is meant for a quick batch process and each project requires tweaks, even though the desired output is unchanged (I'm trying to say there's no way around it; for valid reasons it will always be tweaked). Do I unit test it, and if so, how? (Maybe we don't care if we break a feature used in the past but not in the present or future.)
etc.
I think this should be a wiki. Maybe people would like to say exactly what they should (or should not) unit test? Maybe links to books would be good. I tried one, but it never clarified what should be unit tested, just the problems of writing unit tests and their solutions.
Also, if classes are meant to be used only in that project (by design, spec or whatever other reason) and a class isn't useful alone (let's say it generates HTML using data that returns HTML-ready comments), do I really need to test it? Say, by checking whether all public functions allow null comment objects when my project doesn't ever use a null comment. It's those kinds of things that make me wonder if I am unit testing the wrong code. Also, tons of classes are throwaway once the project is done. It's the borderline throwaway, or not very useful alone, code which bothers me.
Here's what I'm hearing, whether you meant it this way or not: a whole litany of issues and excuses why unit testing might not be applicable to your code. In other words: "I don't see what I'll be getting out of unit tests, and they're a lot of bother to write; maybe they're not for me?"
You know what? You may be right. Unit tests are not a panacea. There are huge, wide swaths of testing that unit testing can't cover.
I think, though, that you're misestimating the cost of maintenance, and what things can break in your code. So here are my thoughts:
Should I test small classes? Yes, if there are things in that class that can possibly break.
Should I test functions? Yes, if there are things in this function that can possibly break. Why wouldn't you? Or is your concern over whether it's considered a unit or not? That's just quibbling over names, and shouldn't have any bearing on whether you should write unit tests for it! But it's common in my experience to see a method or function described as a unit under test.
Should I unit test a class if it's used by two other classes? Yes, if there's anything that can possibly break in that class. Should I test it separately? The advantage of doing so is to be able to isolate breakages straight down to the shared class, instead of hunting through the using classes to see if it was they that broke or one of their dependencies.
Should I test data output from my class if another program will read it? Hell yes, especially if that other program is a 3rd-party one! This is a great application of unit tests (or perhaps system tests, depending on the isolation involved in the test): to prove to yourself that the data you output is precisely what you think you should have output (a small sketch follows this list of questions). I think you'll find that has the power to simplify support calls immeasurably. (Though please note it's not a substitute for good acceptance testing on that customer's end.)
Should I test throwaway code? Possibly. Will pursuing a TDD strategy get your throwaway code out the door faster? It might. Will having solid unit-tested chunks that you can adapt to new constraints reduce the need to throw code away? Perhaps.
Should I test code that's constantly changing? Yes. Just make sure all applicable tests are brought up to date and pass! Constantly changing code can be particularly susceptible to errors, after all, and enabling safe change is another of unit testing's great benefits. Plus, it probably puts a burden on your invariant code to be as robust as possible, to enable this velocity of change. And you know how you can convince yourself whether a piece of code is robust...
Should I test features that are no longer needed? No, you can remove the test, and probably the code as well (testing to ensure you didn't break anything in the process, of course!). Don't leave unit test rot around, especially if the test no longer works or runs, or people in your org will move away from unit tests and you'll lose the benefit. I've seen this happen. It's not pretty.
Should I test code that doesn't get used by my project, even if it was written in the context of my project? Depends on what the deliverable of your project is, and what the priorities of your project are. But are you sure nobody outside of your project will use it? If they won't, and you aren't, perhaps it's just dead code, in which case see above. From my point of view, I wouldn't feel I'd done a complete job with a class if my testing didn't cover all its important functionality, whether the project used all that functionality or not. I like classes that feel complete, but I keep an eye towards not overengineering a bunch of stuff I don't need. If I put something in a class, then, I intend for it to be used, and will therefore want to make sure it works. It's an issue of personal quality and satisfaction to me.
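As an illustration of the data-output question above, a small hedged sketch: ReportExporter and its semicolon-separated format are invented for illustration. The test pins the exact text the exporter writes, so a format change is caught before the consuming program ever sees it:
using System.Globalization;
using NUnit.Framework;

// Hypothetical exporter that dumps data consumed by another application.
public class ReportExporter
{
    public string ExportLine(int orderId, decimal total)
    {
        return orderId + ";" + total.ToString(CultureInfo.InvariantCulture);
    }
}

[TestFixture]
public class ReportExporterTests
{
    [Test]
    public void ExportLine_WritesTheExactFormatTheConsumerExpects()
    {
        var exporter = new ReportExporter();

        string line = exporter.ExportLine(orderId: 42, total: 19.95m);

        // Pin the exact output format; an accidental change fails here
        // before the third-party program receives bad data.
        Assert.AreEqual("42;19.95", line);
    }
}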
Don't get fixated on counting lines of code. Write as much test code as you need to convince yourself that every key piece of functionality is being thoroughly tested. As an extreme example, the SQLite project has a tests:source-code ratio of more than 600:1. I use the term "extreme" in a good sense here; the ludicrous amount of testing that goes on is possibly the predominant reason that SQLite has taken over the world.
How can you do all those calculations? Ideally you should never be in a situation where you could count the lines of your completed class and then start writing the unit tests from scratch. Those two types of code (real code and test code) should be developed and evolved together, and the only LOC metric that should really worry you in the end is 0 LOC of test code.
Relative LOC counts for code and tests are pointless. What matters more is test coverage. What matters most is finding the bugs.
When I'm writing unit tests, I tend to focus my efforts on testing complicated code that is more likely to contain bugs. Simple stuff (e.g. simple getter and setter methods) is unlikely to contain bugs, and can be tested indirectly by higher-level unit tests.
Some time ago I had the same question you posted in mind. I studied a lot of articles, tutorials, books and so on. Although these resources gave me a good starting point, I was still insecure about how to apply unit testing efficiently. After coming across xUnit Test Patterns: Refactoring Test Code (and leaving it on my shelf for about a year; you know, we have a lot of stuff to study), it gave me what I needed to apply unit testing efficiently. With a lot of useful patterns (and advice), it shows how you can become a unit testing coder. Topics such as:
Test strategy patterns
Basic patterns
Fixture setup patterns
Result verification patterns
Test double patterns
Test organization patterns
Database patterns
Value patterns
And so on...
I will show you, for instance, the Derived Value pattern:
A derived input is often employed when we need to test a method that takes a complex object as an argument. For example, thorough input validation testing requires us to exercise the method with each of the attributes of the object set to one or more possible invalid values. Because the first rejected value could cause termination of the method, we must verify each bad attribute in a separate call. We can instantiate the invalid object easily by first creating a valid object and then replacing one of its attributes with an invalid value.
And a test organization pattern related to your question (Testcase Class per Feature):
As the number of test methods grows, we need to decide on which testcase class to put each test method... Using a testcase class per feature gives us a systematic way to break up a large testcase class into several smaller ones without having to change our test methods.
(The pattern descriptions above are from xunitpatterns.com.) My advice: read it carefully.
You seem to be concerned that there could be more test-code than the code-under-test.
I think the ratios could well be higher than you say. I would expect any serious test to exercise a wide range of inputs, so your 20-line class might well have 200 lines of test code.
I do not see that as a problem. The interesting thing for me is that writing tests doesn't seem to slow me down. Rather it makes me focus on the code as I write it.
So, yes test everything. Try not to think of testing as a chore.
I am part of a team that have just started adding test code to our existing, and rather old, code base.
I use 'test' here because I feel that it can be very vague as to whether it is a unit test, or a system test, or an integration test, or whatever. The differences between the terms have large grey areas and don't add a lot of value.
Because we live in the real world, we don't have time to add test code for all of the existing functionality. We still have Dave the test guy, who finds most bugs. Instead, as we develop we write tests. You know how you run your code before you tell your boss that it works? Well, use a unit framework (we use JUnit) to do those runs. And just keep them all, rather than deleting them. Whatever you normally do to convince yourself that it works, do that.
If it is easy to write the test code, do it. If not, leave it to Dave until you think of a good way to automate it, or until you get that spare time between projects where 'they' are trying to decide what to put into the next release.
For Java you can use JUnit:
JUnit
JUnit is a simple framework to write repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks.
Getting Started
To get started with unit testing and JUnit read the article: JUnit Cookbook.
This article describes basic test writing using JUnit 4.
You find additional samples in the org.junit.samples package:
* SimpleTest.java - some simple test cases
* VectorTest.java - test cases for java.util.Vector
JUnit 4.x only comes with a textual TestRunner. For graphical feedback, most major IDEs support JUnit 4. If necessary, you can run JUnit 4 tests in a JUnit 3 environment by adding the following method to each test class:
public static Test suite() {
    return new JUnit4TestAdapter(ThisClass.class);
}
Documentation
* JUnit Cookbook - a cookbook for implementing tests with JUnit.
* Javadoc - API documentation generated with javadoc.
* Frequently asked questions - some frequently asked questions about using JUnit.
* Release notes - the latest JUnit release notes.
* License - the terms of the Common Public License used for JUnit.
The following documents still describe JUnit 3.8:
* The JUnit 3.8 version of this homepage
* Test Infected - Programmers Love Writing Tests - an article demonstrating the development process with JUnit.
* JUnit - A Cook's Tour
JUnit Related Projects/Sites
* junit.org - a site for software developers using JUnit. It provides instructions for how to integrate JUnit with development tools like JBuilder and VisualAge/Java, as well as articles about, and extensions to, JUnit.
* XProgramming.com - various implementations of the xUnit testing framework architecture.
Mailing Lists
There are three JUnit mailing lists:
* JUnit announce: junit-announce@lists.sourceforge.net
* JUnit users list: junit@yahoogroups.com
* JUnit developer list: junit-devel@lists.sourceforge.net
Get Involved
JUnit celebrates programmers testing their own software. As a result bugs, patches, and feature requests which include JUnit TestCases have a better chance of being addressed than those without.
JUnit source code is now hosted on GitHub.
One possibility is to reduce the 'test code' to a language that describes your tests, and an interpreter to run the tests. Teams I have been a part of have used this to wonderful ends, allowing us to write significantly more tests than the "lines of code" would have indicated.
This allowed our tests to be written much more quickly and greatly increased the test legibility.
I am going to answer what I believe are the main points of your question. First, how much test-code should you write? Well, Test-Driven Development can be of some help here. I do not use it as strictly as it is proposed in theory, but I find that writing a test first often helps me to understand the problem I want to solve much better. Also, it will usually lead to good test-coverage.
Secondly, which classes should you test? Again, TDD (or more precisely some of the principles behind it) can be of help. If you develop your system top down and write your tests first, you will have tests for the outer class when writing the inner class. These tests should fail if the inner class has bugs.
TDD is also tightly coupled with the idea of Design for Testability.
My answer is not intended to solve all your problems, but to give you some ideas.
I think it's impossible to write a comprehensive guide of exactly what you should and shouldn't unit test. There are simply too many permutations and types of objects, classes, and functions, to be able to cover them all.
I suggest applying personal responsibility to the testing, and determining the answer yourself. It's your code, and you're responsible for it working. If it breaks, you have to pay the consequences of fixing the code, repairing the data, taking responsibility for the lost revenue, and apologizing to the people whose application broke while they were trying to use it. Bottom line - your code should never break. So what do you have to do to ensure this?
Sometimes unit testing can work well to help you test out all of the specific methods in a library. Sometimes unit testing is just busy-work, because you can tell the code is working based on your use of the code during higher-level testing. You're the developer, you're responsible for making sure the code never breaks - what do you think is the best way to achieve that?
If you think unit testing is a waste of time in a specific circumstance - it probably is. If you've tested the code in all of the application use-case scenarios and they all work, the code is probably good.
If anything is happening in the code that you don't understand - even if the end result is acceptable - then you need to do some more testing to make sure there's nothing you don't understand.
To me, this seems like common sense.
Unit testing is mostly about testing your units from the aspect of functionality: when a specific input comes in, do we receive the expected value, or do we throw the right exception?
Unit tests are very useful, and I recommend writing them. However, not everything needs to be tested. For example, you don't need to test simple getters and setters.
If you want to write your unit tests in Java via Eclipse, please look at "How To Write Java Unit Tests". I hope it helps.

Why should I use Test Driven Development? [duplicate]

Duplicate:
Why should I practice Test Driven Development and how should I start?
For a developer that doesn't know about Test-Driven Development, what problem(s) will be solved by adopting TDD?
[EDIT] Let's assume that the developer already (ab)uses a unit testing framework.
Here are three reasons that TDD might help a developer/team:
Better understanding of what you're going to write
Enforces the policy of writing tests a little better
Speeds up development
One reason to write the tests first is to have a better understanding of the actual code before you write it. To me, this is the main benefit of test driven development. When you write the test cases first, you think more critically about the corner cases. It's then easier to address them when you write the code and ensure that they're accurate.
Another reason is to actually enforce writing the tests. Often when people do unit-testing without the TDD, they have a testing framework set up, write some new code, and then quit. They think that the code already works just fine, so why write tests? It's simple enough that it won't break, right? But now you've lost the advantages of doing unit-tests in the first place (completely different discussion). Write them first, and they're already there.
Writing these tests first could mean that you don't need to launch the program in a debugging environment (slow — especially for larger projects) to test if a few small things work. Of course there's no excuse for not doing so before committing changes.
Convincing yourself or other people to write the tests first may be difficult. You may have better luck getting them to write both at the same time which may be just as beneficial.
Presumably you test code that you've written before you commit it to a repository.
If that's not true you have other issues to deal with.
If it is true, you can look at writing tests using a framework as a way to automate those main routines or drivers that you currently write so you can run all of them automatically at the push of a button. You don't have to pore over output to decide if the test passed or failed; you embed the success or failure of the test in the code and get a thumbs up or down decision right away. Running all the tests at once reduces the chances of a "whack a mole" situation where you fix something in one class and break something else. All the tests have to pass.
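For instance (a hedged sketch, not taken from the answer): the kind of driver routine you would normally eyeball becomes a repeatable test with the pass/fail decision embedded in an assertion. PriceCalculator and its discount rule are invented for illustration:
using NUnit.Framework;

// Hypothetical code under test; previously its result might have been printed
// by a throwaway Main() and checked by eye.
public class PriceCalculator
{
    public decimal Total(decimal subtotal, decimal discountThreshold)
    {
        return subtotal >= discountThreshold ? subtotal * 0.9m : subtotal;
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_AppliesTenPercentDiscountAtOrAboveThreshold()
    {
        var calculator = new PriceCalculator();

        decimal total = calculator.Total(subtotal: 200m, discountThreshold: 100m);

        // Pass/fail is decided by the assertion, not by reading console output.
        Assert.AreEqual(180m, total);
    }
}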
Sounds good so far, yes?
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
If you're already using a test framework, getting good value out of the tests you write, and have meaningful code coverage up around 70%, then I think you're doing well. I'm not sure that TDD will give you much more value. It's up to you to decide whether or not you go that extra mile. Personally, I don't do it. I write tests after the class and refactor if I feel the need. Some people might find it helpful to write the test first knowing it'll fail, but I don't.
(This is more of a comment agreeing with duffymo's answer than an answer of its own.)
duffymo answers:
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
I think it's actually to force coders to think about what their code is doing. Having to think about a test makes one consider what the code is supposed to do: what the pre-conditions and post-conditions are, which functions are primitive and which are composed of primitive functions, what the minimal necessary public interface is, and what's an implementation detail.
These are all things I routinely think about, so like you, "test first" doesn't add a whole lot, for me. And frankly (I know this is heresy in some circles) I like to "anchor" the core ideas of a class by sketching out the public interface first; that way I can look at it, mentally use it, and see if it's as clean as I thought it was. (A class or a library should be easy and intuitive for client programmers to use.)
In other words, I do what TDD tries to ensure happens by writing tests first, but like duffymo, I get there a different way.
And the real point of "test first" is to get a coder to pause and think like a designer. It's silly to make a fetish of how the programmer enters that state; for those who don't do it naturally, "test first" serves as a ritual to get them there. For those who do, "test first" doesn't add much -- and can get in the way of the programmer's habitual way of getting into that state.
Again, we want to look at results, not rituals. If a junior guy needs a ritual, a "stations of the cross" or a rosary* to "get in the groove", "test first" serves that purpose. If someone has their own way to get there, that's great too.
Note that I'm not saying that code shouldn't be tested. It should. It gives us a safety net, which in turn allows us to concentrate our attention on writing good code, even audacious code, because we know the net is there to catch errors.
All I am saying is that fetishistic insistence on "test first" confuses the method (one of many) with the goal, making the programmer think about what he's coding.
* To be ecumenical, I'll note that both Catholics and Muslims use rosaries. And again, it's a mechanical, muscle-memory way to put oneself into a certain frame of mind. It's a fetish (in the original sense of a magic object, not the "sexual fetish" meaning) or good-luck charm. So is saying "Om mani padme hum", or sitting zazen, or stroking a "lucky" rabbit's foot (not so lucky for the rabbit). The philosopher Jerry Fodor, when thinking about hard problems, has a similar ritual: he repeats to himself, "C'mon, Jerry, you can do it!" (I tried that too, but since my name is not Jerry, it didn't work for me. ;) )
Ideally:
You won't waste time writing features you don't need. You'll have a comprehensive unit test suite to serve as a safety net for refactoring. You'll have executable examples of how your code is intended to be used. Your development flow will be smoother and faster; you'll spend less time in the debugger.
But most of all, your design will be better. Your code will be better factored - loosely coupled, highly cohesive - and better formed - smaller, better-named methods & classes.
For my current project (which runs on a relatively heavyweight process), I have adopted a peculiar form of TDD that consists of writing skeleton test cases based on requirements documents and GUI mockups. I write dozens, sometimes hundreds of those before starting to implement anything (this runs totally against "pure" TDD which says you should write a few tests, then immediately start on a skeleton implementation).
I have found this to be an excellent way to review the requirements documents. I have to think about the behaviour described in them much more intensively than if I were just to read them. In consequence, I find many more inconsistencies and gaps in them which I would otherwise only have found during implementation. This way, I can ask for clarification earlier and have better requirements when I start implementing.
Then, during implementation, the tests are a way to measure how far I've yet to go. And they prevent me from forgetting anything (don't laugh, that's a real problem when you work on larger use cases).
And the moral is: even when your dev process doesn't really support TDD, it can still be done in a way, and improve quality and productivity.
I personally do not use TDD, but one of the biggest pros I can see in the methodology is the assurance of customer satisfaction. Basically, the idea is that the steps of your development process are these:
1) Talk to customer about what the application is supposed to do, and how it is supposed to react to different situations.
2) Translate the outcome of 1) into Unit Tests, which each test one feature or scenario.
3) Write simple, "sloppy" code that (barely) passes the tests. When this is done, you have met your customer's expectations.
4) Refactor the code you wrote in 3) until you think you've done it in the most effective way possible.
When this is done you have hopefully produced high-quality code, that meets your customer's needs. If the customer now wants a new feature, you start the cycle over - discuss the feature, write a test that makes sure it works, write code that passes the test, refactor.
And as others have said, each time you run your tests you ensure that the old code still works, and that you can add new functionality without breaking old one.
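One turn of that cycle might look like the following minimal sketch (OrderCalculator and the free-shipping rule are invented for illustration): the failing test from step 2 is written first, then the simplest code that makes it pass, which step 4 would then refactor:
using NUnit.Framework;

// Step 2: the test written from the conversation with the customer.
// It fails at first because OrderCalculator does not exist yet (red).
[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void ShippingIsFree_WhenOrderTotalIsAtLeastFifty()
    {
        var calculator = new OrderCalculator();

        Assert.IsTrue(calculator.IsShippingFree(orderTotal: 50m));
        Assert.IsFalse(calculator.IsShippingFree(orderTotal: 49.99m));
    }
}

// Step 3: the simplest, even "sloppy", code that makes the test pass (green).
// Step 4 would refactor this without changing the test.
public class OrderCalculator
{
    public bool IsShippingFree(decimal orderTotal)
    {
        return orderTotal >= 50m;
    }
}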
Most of the people I have talked to don't use a complete TDD model. They usually find the testing model that works best for them. Find yours: play with TDD and find where you are the most productive.
TDD (Test Driven Development/ Design) provides the following advantages
ensures you know the story card's acceptance criteria before you start
ensures that you know when to stop coding (i.e., when the acceptance criteria have been met), which prevents gold-plating
As a result you end up with code that is
testable
clean design
able to be refactored with confidence
the minimal code necessary to satisfy the story card
a living specification of how the code works
able to support a sustainable pace of new features
I made a big effort to learn TDD for Ruby on Rails development. It took several days before I really got into it. I was very skeptical, but I made the effort because programmers I respect support it.
At this point I feel it was definitely worth the effort. There are several benefits which I'm sure others will be happy to list for you. To me the most important advantage is that it helps avoid that nightmare situation late in a project where something suddenly breaks for no apparent reason and then you're spending a day and a half with the debugger. It helps prevent your code base from deteriorating as you add more and more logic to it.
It is common knowledge that writing tests and having a large number of automated tests are a Good Thing.
However, without TDD, it often just becomes tedious. People write tests, and then leave it, and the tests do not get updated as they should, nor do new features get tested as often as they should either.
A big part of this is because the code has become a pain to test - TDD will influence your design so that it is much easier to test. Because you've used TDD, you have a good number of tests, which makes it much easier to find regressions whenever your code or requirements change, simplifying debugging dramatically, creating an appreciation of good TDD and encouraging more tests to be written when changes are needed - and we're back to the start of the cycle.
There are many advantages:
Higher code quality
Fewer bugs
Less wasted time
Any of those alone would be sufficient justification to implement TDD.

Beginning TDD - Challenges? Solutions? Recommendations? [closed]

OK, I know there have already been questions about getting started with TDD, and I guess the general consensus is to just do it. However, I seem to have the following problems getting my head into the game:
When working with collections, do we still test the obvious add/remove/insert operations succeed, even when they're based on generics etc. where we kind of "know" they're going to work?
Some tests seem to take forever to implement, such as when working with string output. Is there a "better" way to go about this sort of thing? (e.g. test the object model before parsing, break parsing down into small operations and test there.) In my mind you should always test the "end result", but that can vary wildly and be tedious to set up.
I don't have a testing framework to use (work won't pay for one) that I can "practice" with. Are there any good ones that are free for commercial use? (At the moment I am using good ol' Debug.Assert :)
Probably the biggest: sometimes I don't know what to expect NOT to happen. I mean, you get your green light, but I am always concerned that I may be missing a test. Do you dig deeper to try and break the code, or leave it be and wait for it all to fall over later (which will cost more)?
So basically what I am looking for here is not a "just do it" but more "I did this, had problems with this, solved them by doing this" - the personal experience :)
First, it is alright and normal to feel frustrated when you first start trying to use TDD in your coding style. Just don't get discouraged and quit, you will need to give it some time. It is a major paradigm shift in how we think about solving a problem in code. I like to think of it like when we switched from procedural to object oriented programming.
Secondly, I feel that test driven development is first and foremost a design activity that is used to flesh out the design of a component by creating a test that first describes the API it is going to expose and how you are going to consume its functionality. The test will help shape and mold the System Under Test until you have been able to encapsulate enough functionality to satisfy whatever tasks you happen to be working on.
Taking the above paragraph in mind, let's look at your questions:
If I am using a collection in my system under test, then I will set up an expectation to make sure that the code was called to insert the item, and then assert the count of the collection. I don't necessarily test the Add method on my internal list. I just make sure it was called when the method that adds the item is called. I do this by adding a mocking framework into the mix with my testing framework (a short sketch follows the list of mocking frameworks below).
Testing strings as output can be tedious. You cannot account for every outcome. You can only test what you expect based on the functionality of the system under test. You should always break your tests down to the smallest element that it is testing. Which means you will have a lot of tests, but tests that are small and fast and only test what they should, nothing else.
There are a lot of open source testing frameworks to choose from. I am not going to argue which is best. Just find one you like and start using it.
MbUnit
NUnit
xUnit
All you can do is setup your tests to account for what you want to happen. If a scenario comes up that introduces a bug in your functionality, at least you have a test around the functionality to add that scenario into the test and then change your functionality until the test passes. One way to find where we may have missed a test is to use code coverage.
I introduced the mocking term in the answer to question one. When you introduce mocking into your arsenal for TDD, it makes testing dramatically easier by abstracting away the parts that are not part of the system under test. Here are some of the mocking frameworks out there:
Moq: Open Source
RhinoMocks: Open Source
TypeMock: Commercial Product
NSubstitute: Open Source
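Here is a rough sketch of the expectation described in point one, using NUnit with Moq from the list above. IItemStore and Inventory are invented for illustration; the test verifies the interaction instead of re-testing the collection's own Add method:
using Moq;
using NUnit.Framework;

// Hypothetical abstraction over the internal collection, plus the class under test.
public interface IItemStore
{
    void Add(string item);
}

public class Inventory
{
    private readonly IItemStore store;
    public Inventory(IItemStore store) { this.store = store; }
    public void AddItem(string item) { store.Add(item); }
}

[TestFixture]
public class InventoryTests
{
    [Test]
    public void AddItem_StoresTheItemExactlyOnce()
    {
        var store = new Mock<IItemStore>();
        var inventory = new Inventory(store.Object);

        inventory.AddItem("widget");

        // The expectation: the code under test asked the store to add the item, exactly once.
        store.Verify(s => s.Add("widget"), Times.Once());
    }
}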
One way to help in using TDD, besides reading about the process, is to watch people do it. I recommend watching the screencasts by JP Boodhoo on DNRTV. Check these out:
Jean Paul Boodhoo on Test Driven Development Part 1
Jean Paul Boodhoo on Test Driven Development Part 2
Jean Paul Boodhoo on Demystifying Design Patterns Part 1
Jean Paul Boodhoo on Demystifying Design Patterns Part 2
Jean Paul Boodhoo on Demystifying Design Patterns Part 3
Jean Paul Boodhoo on Demystifying Design Patterns Part 4
Jean Paul Boodhoo on Demystifying Design Patterns Part 5
OK, these will help you see how the terms I introduced are used. They will also introduce another tool called ReSharper and show how it can facilitate the TDD process. I can't recommend this tool enough when doing TDD. It seems like you are learning the process, and you are just finding some of the problems that have already been solved by using other tools.
I think I would be doing an injustice to the community, if I didn't update this by adding Kent Beck's new series on Test Driven Development on Pragmatic Programmer.
From my own experience:
Only test your own code, not the underlying framework's code. So if you're using a generic list then there's no need to test Add, Remove etc.
There is no 2. Look over there! Monkeys!!!
NUnit is the way to go.
You definitely can't test every outcome. I test for what I expect to happen, and then test a few edge cases where I expect to get exceptions or invalid responses. If a bug comes up down the track because of something you forgot to test, the first thing you should do (before trying to fix the bug) is write a test to prove that the bug exists.
My take on this is following:
+1 for not testing framework code, but you may still need to test classes derived from framework classes.
If some class/method is cumbersome to test, it may be a strong indication that something is wrong with the design. I try to follow the "1 class - 1 responsibility, 1 method - 1 action" principle. That way you will be able to test complex methods much more easily, by doing it in smaller portions.
+1 for xUnit. For Java you may also consider TestNG.
TDD is not a single event, it is a process. So do not try to envision everything from the beginning, but make sure that every bug found in the code is actually covered by a test once discovered.
I think the most important thing with (and actually one of the great outcomes of, in a somewhat recursive manner) TDD is successful management of dependencies. You have to make sure that modules are tested in isolation with no elaborate setup needed. For example, if you're testing a component that eventually sends an email, make the email sender a dependency so that you can mock it in your tests.
This leads to a second point - mocks are your friends. Get familiarized with mocking frameworks and the style of tests they promote (behavioral, as opposed to the classic state based), and the design choices they encourage (The "Tell, don't ask" principle).
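As a small hedged sketch of both points (the IEmailSender interface, OrderService class and the test are my own illustration, not code from the answer): the email sender is injected through the constructor, so the test can substitute a Moq mock and check only the interaction:
using Moq;
using NUnit.Framework;

public interface IEmailSender
{
    void Send(string to, string subject, string body);
}

public class OrderService
{
    private readonly IEmailSender emailSender;

    public OrderService(IEmailSender emailSender)   // dependency injected here
    {
        this.emailSender = emailSender;
    }

    public void ConfirmOrder(string customerEmail)
    {
        // ... business logic elided for the sketch ...
        emailSender.Send(customerEmail, "Order confirmed", "Thanks for your order!");
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void ConfirmOrder_SendsAConfirmationEmail()
    {
        var emailSender = new Mock<IEmailSender>();
        var service = new OrderService(emailSender.Object);

        service.ConfirmOrder("jane@example.com");

        // No real email goes out; we only check the interaction with the dependency.
        emailSender.Verify(s => s.Send("jane@example.com", It.IsAny<string>(), It.IsAny<string>()), Times.Once());
    }
}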
I found that the principles illustrated in the Three Index Cards to Easily Remember the Essence of TDD is a good guide.
Anyway, to answer your questions
You don't have to test something you "know" is going to work, unless you wrote it. You didn't write generics, Microsoft did ;)
If you need to do so much for your test, maybe your object/method is doing too much as well.
Download TestDriven.NET to immediately start unit testing in your Visual Studio (except if it's an Express edition).
Just test the correct thing that will happen. You don't need to test everything that can go wrong: you have to wait for your tests to fail for that.
Seriously, just do it, dude. :)
I am no expert at TDD, by any means, but here is my view:
If it is completely trivial (getters/setters etc) do not test it, unless you don't have confidence in the code for some reason.
If it is a quite simple, but non-trivial method, test it. The test is probably easy to write anyway.
When it comes to what to expect not to happen, I would say that if a certain potential problem is the responsibility of the class you are testing, you need to test that it handles it correctly. If it is not the current class' responsibility, don't test it.
The xUnit testing frameworks are often free to use, so if you are a .Net guy, check out NUnit, and if Java is your thing check out JUnit.
The above advice is good, and if you want a list of free frameworks you have to look no farther than the xUnit Frameworks List on Wikipedia. Hope this helps :)
In my opinion (your mileage may vary):
1- If you didn't write it don't test it. If you wrote it and you don't have a test for it it doesn't exist.
3- As everyone's said, xUnit's free and great.
2 & 4- Deciding exactly what to test is one of those things you can debate about with yourself forever. I try to draw this line using the principles of design by contract. Check out "Object Oriented Software Construction" or "The Pragmatic Programmer" for details on it.
Keep tests short and "atomic". Test the smallest assumption in each test. Make each TestMethod independent; for integration tests I even create a new database for each method. If you need to build some data for each test, use an "Init" method. Use mocks to isolate the class you're testing from its dependencies.
I always think "what's the minimum amount of code I need to write to prove this works for all cases ?"
Over the last year I have become more and more convinced of the benefits of TDD.
The things that I have learned along the way:
1) dependency injection is your friend. I'm not talking about inversion of control containers and frameworks to assemble plugin architectures, just passing dependencies into the constructor of the object under test. This pays back huge dividends in the testability of your code.
2) I set out with the passion / zealotry of the convert and grabbed a mocking framework and set about using mocks for everything I could. This led to brittle tests that required lots of painful set up and would fall over as soon as I started any refactoring. Use the correct kind of test double. Fakes where you just need to honour an interface, stubs to feed data back to the object under test, mock only where you care about interaction.
3) Tests should be small. Aim for one assertion or interaction being tested in each test. I try to do this, and mostly I'm there. This is about the robustness of the test code, and also about the amount of complexity in a test when you need to revisit it later.
The biggest problem I have had with TDD has been working with a specification from a standards body and a third party implementation of that standard that was the de-facto standard. I coded lots of really nice unit tests to the letter of the specification only to find that the implementation on the other side of the fence saw the standard as more of an advisory document. They played quite loose with it. The only way to fix this was to test with the implementation as well as the unit tests and refactor the tests and code as necessary. The real problem was the belief on my part that as long as I had code and unit tests all was good. Not so. You need to be building actual outputs and performing functional testing at the same time as you are unit testing. Small pieces of benefit all the way through the process - into users or stakeholders hands.
Just as an addition to this, I thought I would mention that I have put up a blog post with my thoughts on getting started with testing (following this discussion and my own research), since it may be useful to people viewing this thread.
"TDD – Getting Started with Test-Driven Development" - I have got some great feedback so far and would really appreciate any more that you guys have to offer.
I hope this helps! :)

What is unit testing? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 3 years ago.
Improve this question
I saw many questions asking 'how' to unit test in a specific language, but no question asking 'what', 'why', and 'when'.
What is it?
What does it do for me?
Why should I use it?
When should I use it (also when not)?
What are some common pitfalls and misconceptions?
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automatable and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Note that if your test code writes to a file, opens a database connection or does something over the network, it's more appropriately categorized as an integration test. Integration tests are a good thing, but should not be confused with unit tests. Unit test code should be short, sweet and quick to execute.
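As a rough illustration (the class and method names here are made up for the example, not taken from any particular codebase), a unit test in that spirit might look like this in JUnit 5:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class OrderTotalTest {

    // Hypothetical pure function under test: no file, database or network
    // access, so the tests stay short, sweet and quick to execute.
    static int totalCents(int unitPriceCents, int quantity) {
        return unitPriceCents * quantity;
    }

    @Test
    void totalIsUnitPriceTimesQuantity() {
        assertEquals(750, totalCents(250, 3));
    }

    @Test
    void totalIsZeroForZeroQuantity() {
        assertEquals(0, totalCents(250, 0));
    }
}
```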
Another way to look at unit testing is that you write the tests first. This is known as Test-Driven Development (TDD for short). TDD brings additional advantages:
You don't write speculative "I might need this in the future" code -- just enough to make the tests pass
The code you've written is always covered by tests
By writing the test first, you're forced to think about how you want to call the code, which usually improves the design of the code in the long run (a small sketch follows).
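As a rough sketch of that order (not anyone's canonical workflow; Slugifier and toSlug are invented names), the test below is written first and the class second, with just enough code to make it pass:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SlugifierTest {

    // Written first, before Slugifier exists: the test decides how the code
    // will be called (a static method taking a title and returning a slug).
    @Test
    void replacesSpacesWithDashesAndLowercases() {
        assertEquals("hello-world", Slugifier.toSlug("Hello World"));
    }
}

// Written second: just enough to make the test pass, nothing speculative.
class Slugifier {
    static String toSlug(String title) {
        return title.trim().toLowerCase().replace(' ', '-');
    }
}
```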
If you're not doing unit testing now, I recommend you get started on it. Get a good book; practically any xUnit book will do, because the concepts transfer well between frameworks.
Sometimes writing unit tests can be painful. When it gets that way, try to find someone to help you, and resist the temptation to "just write the damn code". Unit testing is a lot like washing the dishes. It's not always pleasant, but it keeps your metaphorical kitchen clean, and you really want it to be clean. :)
Edit: One misconception comes to mind, although I'm not sure if it's so common. I've heard a project manager say that unit tests made the team write all the code twice. If it looks and feels that way, well, you're doing it wrong. Not only does writing the tests usually speed up development, but it also gives you a convenient "now I'm done" indicator that you wouldn't have otherwise.
I don't disagree with Dan (although a better choice may just be not to answer)...but...
Unit testing is the process of writing code to test the behavior and functionality of your system.
Obviously tests improve the quality of your code, but that's just a superficial benefit of unit testing. The real benefits are to:
Make it easier to change the technical implementation while making sure you don't change the behavior (refactoring). Properly unit tested code can be aggressively refactored/cleaned up with little chance of breaking anything without noticing it.
Give developers confidence when adding behavior or making fixes.
Document your code
Indicate areas of your code that are tightly coupled. It's hard to unit test code that's tightly coupled.
Provide a means to use your API and look for difficulties early on
Indicate methods and classes that aren't very cohesive
You should unit test because it's in your interest to deliver a maintainable, quality product to your client.
I'd suggest you use it for any system, or part of a system, which models real-world behavior. In other words, it's particularly well suited for enterprise development. I would not use it for throw-away/utility programs. I would not use it for parts of a system that are problematic to test (UI is a common example, but that isn't always the case)
The greatest pitfall is that developers test too large a unit, or they consider a method a unit. This is particularly true if you don't understand Inversion of Control - in which case your unit tests will always turn into end-to-end integration tests. Unit tests should test individual behaviors - and most methods have many behaviors.
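As a sketch of "test individual behaviors", here is a hypothetical method with three behaviors and a correspondingly small test for each (JUnit 5; the Text class is invented for the example):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical method with several distinct behaviors.
class Text {
    static String truncate(String text, int max) {
        if (text == null) {
            return "";
        }
        if (text.length() <= max) {
            return text;
        }
        return text.substring(0, max) + "...";
    }
}

class TextTest {
    // One behavior per test, not one test per method.
    @Test
    void shortTextIsReturnedUnchanged() {
        assertEquals("hello", Text.truncate("hello", 10));
    }

    @Test
    void longTextIsCutAtTheLimitAndMarkedWithAnEllipsis() {
        assertEquals("hello...", Text.truncate("hello world", 5));
    }

    @Test
    void nullIsTreatedAsEmptyText() {
        assertEquals("", Text.truncate(null, 10));
    }
}
```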
The greatest misconception is that programmers shouldn't test. Only bad or lazy programmers believe that. Should the guy building your roof not test it? Should the doctor replacing a heart valve not test the new valve? Only a programmer can test that his code does what he intended it to do (QA can test edge cases - how the code behaves when it's told to do things the programmer didn't intend - and the client can do acceptance testing - does the code do what the client paid for it to do?).
The main difference of unit testing, as opposed to "just opening a new project and testing this specific code", is that it's automated, and thus repeatable.
If you test your code manually, it may convince you that the code is working perfectly - in its current state. But what about a week later, when you've made a slight modification to it? Are you willing to retest it by hand whenever anything changes in your code? Most probably not :-(
But if you can run your tests anytime, with a single click, exactly the same way, within a few seconds, then they will show you immediately whenever something is broken. And if you also integrate the unit tests into your automated build process, they will alert you to bugs even in cases where a seemingly completely unrelated change broke something in a distant part of the codebase - when it would not even occur to you that there is a need to retest that particular functionality.
This is the main advantage of unit tests over hand testing. But wait, there is more:
unit tests shorten the development feedback loop dramatically: with a separate testing department it may take weeks for you to know that there is a bug in your code, by which time you have already forgotten much of the context, thus it may take you hours to find and fix the bug; OTOH with unit tests, the feedback cycle is measured in seconds, and the bug fix process is typically along the lines of an "oh sh*t, I forgot to check for that condition here" :-)
unit tests effectively document (your understanding of) the behaviour of your code (see the sketch after this list)
unit testing forces you to reevaluate your design choices, which results in simpler, cleaner design
Unit testing frameworks, in turn, make it easy for you to write and run your tests.
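As a small sketch of tests doubling as documentation (the EmailValidator below is a made-up example), descriptive test names - and, in JUnit 5, the @DisplayName annotation - let a test report read like a short specification:

```java
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical validator; the point is that the test names describe the
// behavior, so the test report documents the code.
class EmailValidator {
    static boolean isValid(String address) {
        return address != null && address.contains("@") && !address.endsWith("@");
    }
}

@DisplayName("An email address")
class EmailValidatorTest {

    @Test
    @DisplayName("is accepted when it has a user part and a domain")
    void acceptsUserAndDomain() {
        assertTrue(EmailValidator.isValid("alice@example.com"));
    }

    @Test
    @DisplayName("is rejected when the domain is missing")
    void rejectsMissingDomain() {
        assertFalse(EmailValidator.isValid("alice@"));
    }
}
```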
I was never taught unit testing at university, and it took me a while to "get" it. I read about it, went "ah, right, automated testing, that could be cool I guess", and then I forgot about it.
It took quite a bit longer before I really figured out the point: Let's say you're working on a large system and you write a small module. It compiles, you put it through its paces, it works great, you move on to the next task. Nine months down the line and two versions later someone else makes a change to some seemingly unrelated part of the program, and it breaks the module. Worse, they test their changes, and their code works, but they don't test your module; hell, they may not even know your module exists.
And now you've got a problem: broken code is in the trunk and nobody even knows. The best case is an internal tester finds it before you ship, but fixing code that late in the game is expensive. And if no internal tester finds it...well, that can get very expensive indeed.
The solution is unit tests. They'll catch problems when you write code - which is fine - but you could have done that by hand. The real payoff is that they'll catch problems nine months down the line when you're working on a completely different project, but a summer intern thinks it'd look tidier if those parameters were in alphabetical order - and then the unit test you wrote way back fails, and someone throws things at the intern until he changes the parameter order back. That's the "why" of unit tests. :-)
Chipping in on the philosophical pros of unit testing and TDD, here are a few of the key "lightbulb" observations that struck me on my tentative first steps on the road to TDD enlightenment (none original or necessarily news)...
TDD does NOT mean writing twice the amount of code. Test code is typically fairly quick and painless to write and, critically, is a key part of your design process.
TDD helps you to realize when to stop coding! Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing.
The tests and the code work together to achieve better code. Your code could be bad/buggy. Your TEST could be bad/buggy. In TDD you are banking on the chances of BOTH being bad/buggy being fairly low. Often it's the test that needs fixing, but that's still a good outcome.
TDD helps with coding constipation. You know that feeling that you have so much to do you barely know where to start? It's Friday afternoon, if you just procrastinate for a couple more hours... TDD allows you to flesh out very quickly what you think you need to do, and gets your coding moving quickly. Also, like lab rats, I think we all respond to that big green light and work harder to see it again!
In a similar vein, these designer types can SEE what they're working on. They can wander off for a juice / cigarette / iPhone break and return to a monitor that immediately gives them a visual cue as to where they got to. TDD gives us something similar. It's easier to see where we got to when life intervenes...
I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful, even if the rest of my code coverage is woefully incomplete.
TDD helps in all kinds of surprising ways down the line. Good unit tests can help document what something is supposed to do, they can help you migrate code from one project to another and give you an unwarranted feeling of superiority over your non-testing colleagues :)
This presentation is an excellent introduction to all the yummy goodness testing entails.
I would like to recommend the xUnit Test Patterns book by Gerard Meszaros. It's large, but it's a great resource on unit testing. Here is a link to his web site where he discusses the basics of unit testing: http://xunitpatterns.com/XUnitBasics.html
I use unit tests to save time.
When building business logic (or data access) testing functionality can often involve typing stuff into a lot of screens that may or may not be finished yet. Automating these tests saves time.
For me unit tests are a kind of modularised test harness. There is usually at least one test per public function. I write additional tests to cover various behaviours.
All the special cases that you thought of when developing the code can be recorded in the code in the unit tests. The unit tests also become a source of examples on how to use the code.
It is a lot faster for me to discover that my new code breaks something in my unit tests than to check the code in and have some front-end developer find a problem.
For data access testing, I try to write tests that either make no changes or clean up after themselves.
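One common way to get that clean-up - sketched below assuming JUnit 5, plain JDBC and an in-memory H2 database on the classpath, none of which the answer above prescribes - is to run each test inside a transaction and roll it back afterwards. By this thread's own definitions this is really an integration test rather than a unit test:

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import static org.junit.jupiter.api.Assertions.assertEquals;

class RollbackCleanupIT {

    private Connection connection;

    @BeforeEach
    void openConnectionAndStartTransaction() throws SQLException {
        // Hypothetical connection string; assumes the H2 driver is available.
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
        try (Statement ddl = connection.createStatement()) {
            ddl.execute("CREATE TABLE IF NOT EXISTS customer (name VARCHAR(50))");
        }
        // Everything the test does from here on happens inside a transaction.
        connection.setAutoCommit(false);
    }

    @AfterEach
    void rollBackAndClose() throws SQLException {
        // Undo whatever the test inserted or changed, so no data is left behind.
        connection.rollback();
        connection.close();
    }

    @Test
    void insertedRowIsVisibleInsideTheTransaction() throws SQLException {
        try (Statement statement = connection.createStatement()) {
            statement.execute("INSERT INTO customer (name) VALUES ('Alice')");
            try (ResultSet rows = statement.executeQuery("SELECT COUNT(*) FROM customer")) {
                rows.next();
                assertEquals(1, rows.getInt(1));
            }
        }
    }
}
```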
Unit tests aren't going to meet every testing requirement on their own, but they will save development time and cover the core parts of the application.
This is my take on it. I would say unit testing is the practice of writing software tests to verify that your real software does what it is meant to. This started with JUnit in the Java world and has become a best practice in PHP as well, with SimpleTest and PHPUnit. It's a core practice of Extreme Programming and helps you to be sure that your software still works as intended after editing. If you have sufficient test coverage, you can do major refactoring, bug fixing, or add features rapidly with much less fear of introducing other problems.
It's most effective when all unit tests can be run automatically.
Unit testing is generally associated with OO development. The basic idea is to create a script that sets up the environment for your code and then exercises it; you write assertions specifying the output you expect to receive, and then execute your test script using a framework such as those mentioned above.
The framework will run all the tests against your code and then report back the success or failure of each test. PHPUnit is run from the Linux command line by default, though there are HTTP interfaces available for it. SimpleTest is web-based by nature and is much easier to get up and running, IMO. In combination with Xdebug, PHPUnit can give you automated statistics for code coverage, which some people find very useful.
Some teams write hooks into their Subversion repository so that unit tests are run automatically whenever you commit changes.
It's good practice to keep your unit tests in the same repository as your application.
Libraries like NUnit, xUnit or JUnit are pretty much mandatory if you want to develop your projects using the TDD approach popularized by Kent Beck:
You can read Introduction to Test Driven Development (TDD) or Kent Beck's book Test Driven Development: By Example.
Then, if you want to be sure your tests cover a "good" portion of your code, you can use software like NCover, JCover, PartCover or whatever. They'll tell you the coverage percentage of your code. Depending on how adept you are at TDD, you'll know whether you've practiced it well enough :)
Unit testing is the testing of a unit of code (e.g. a single function) without the need for the infrastructure that that unit of code relies on - i.e. you test it in isolation.
If, for example, the function that you're testing connects to a database and does an update, in a unit test you might not want to do that update. You would if it were an integration test but in this case it's not.
So a unit test would exercise the functionality enclosed in the "function" you're testing without side effects of the database update.
Say your function retrieved some numbers from a database and then performed a standard deviation calculation. What are you trying to test here? That the standard deviation is calculated correctly or that the data is returned from the database?
In a unit test you just want to test that the standard deviation is calculated correctly. In an integration test you want to test the standard deviation calculation and the database retrieval.
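To make that concrete, one possible (purely illustrative) split keeps the calculation free of any database code, so the unit test can hand it plain numbers:

```java
import java.util.List;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The calculation is separated from the retrieval, so it can be unit tested
// in isolation; fetching the numbers from the database is covered elsewhere,
// in an integration test.
class Statistics {
    static double standardDeviation(List<Double> values) {
        double mean = values.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double variance = values.stream()
                .mapToDouble(v -> (v - mean) * (v - mean))
                .average()
                .orElse(0.0);
        return Math.sqrt(variance);
    }
}

class StatisticsTest {
    @Test
    void standardDeviationOfIdenticalValuesIsZero() {
        assertEquals(0.0, Statistics.standardDeviation(List.of(5.0, 5.0, 5.0)), 1e-9);
    }

    @Test
    void standardDeviationOfTwoAndFourIsOne() {
        // Population standard deviation of {2, 4}: mean 3, variance 1.
        assertEquals(1.0, Statistics.standardDeviation(List.of(2.0, 4.0)), 1e-9);
    }
}
```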
Unit testing is about writing code that tests your application code.
The Unit part of the name is about the intention to test small units of code (one method for example) at a time.
xUnit frameworks are there to help with this kind of testing. Part of what they provide is automated test runners that tell you which tests fail and which ones pass.
They also have facilities to set up common code that you need in each test beforehand and tear it down when all the tests have finished.
You can have a test check that an expected exception has been thrown, without having to write the whole try/catch block yourself.
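In JUnit 5, both facilities might look roughly like this (the SettingsParser class is invented for the example): @BeforeEach runs the shared setup before every test, and assertThrows replaces the hand-written try/catch:

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Hypothetical class under test.
class SettingsParser {
    String parse(String line) {
        if (line == null || !line.contains("=")) {
            throw new IllegalArgumentException("expected key=value");
        }
        return line.substring(line.indexOf('=') + 1).trim();
    }
}

class SettingsParserTest {

    private SettingsParser parser;

    @BeforeEach
    void createParser() {
        // Common setup, run automatically before each test method.
        parser = new SettingsParser();
    }

    @Test
    void returnsTheValuePartOfTheLine() {
        assertEquals("42", parser.parse("answer = 42"));
    }

    @Test
    void rejectsLinesWithoutAnEqualsSign() {
        // No try/catch block needed to check for the expected exception.
        assertThrows(IllegalArgumentException.class, () -> parser.parse("no delimiter"));
    }
}
```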
I think the point that you don't understand is that unit testing frameworks like NUnit (and the like) will help you in automating small to medium-sized tests. Usually you can run the tests in a GUI (that's the case with NUnit, for instance) by simply clicking a button and then - hopefully - see the progress bar stay green. If it turns red, the framework shows you which test failed and what exactly went wrong. In a normal unit test, you often use assertions, e.g. Assert.AreEqual(expectedValue, actualValue, "some description") - so if the two values are unequal you will see an error saying "some description: expected <expectedValue> but was <actualValue>".
So, in conclusion, unit testing makes testing faster and a lot more comfortable for developers. You can run all the unit tests before committing new code so that you don't break the build for other developers on the same project.
Use Testivus. All you need to know is right there :)
Unit testing is a practice to make sure that the function or module you are going to implement behaves as expected (the requirements), and also to check how it behaves in scenarios like boundary conditions and invalid input.
xUnit, NUnit, mbUnit, etc. are tools which help you in writing the tests.
Test Driven Development has sort of taken over the term Unit Test. As an old timer I will mention the more generic definition of it.
Unit Test also means testing a single component in a larger system. This single component could be a DLL, an EXE, a class library, etc. It could even be a single system in a multi-system application. So ultimately a Unit Test ends up being the testing of whatever you want to call a single piece of a larger system.
You would then move up to integrated or system testing by testing how all the components work together.
First of all, whether we are speaking about unit testing or any other kind of automated testing (integration, load, UI testing, etc.), the key difference from what you suggest is that it is automated, repeatable, and doesn't consume any human resources (nobody has to perform the tests; they usually run at the press of a button).
I went to a presentation on unit testing at FoxForward 2007 and was told never to unit test anything that works with data. After all, if you test on live data, the results are unpredictable, and if you don't test on live data, you're not actually testing the code you wrote. Unfortunately, that's most of the coding I do these days. :-)
I did take a shot at TDD recently when I was writing a routine to save and restore settings. First, I verified that I could create the storage object. Then, that it had the method I needed to call. Then, that I could call it. Then, that I could pass it parameters. Then, that I could pass it specific parameters. And so on, until I was finally verifying that it would save the specified setting, allow me to change it, and then restore it, for several different syntaxes.
I didn't get to the end, because I needed-the-routine-now-dammit, but it was a good exercise.
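For what it's worth, a rough sketch of where such a progression might end up looks like this (all names hypothetical; the earlier baby steps - the object can be created, the method exists, it can be called - end up folded into the final tests):

```java
import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical settings store with save/restore behaviour.
class SettingsStore {
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, String> saved = new HashMap<>();

    void set(String key, String value) {
        values.put(key, value);
    }

    String get(String key) {
        return values.get(key);
    }

    void save() {
        saved.clear();
        saved.putAll(values);
    }

    void restore() {
        values.clear();
        values.putAll(saved);
    }
}

class SettingsStoreTest {
    @Test
    void savedSettingSurvivesAChangeAndARestore() {
        SettingsStore store = new SettingsStore();
        store.set("volume", "11");
        store.save();

        store.set("volume", "3");
        store.restore();

        assertEquals("11", store.get("volume"));
    }
}
```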
What do you do if you are given a pile of crap and seem to be stuck in a perpetual state of cleanup, where you know that adding any new feature or code can break the existing functionality because the current software is like a house of cards?
How can we do unit testing then?
You start small. The project I just got into had no unit testing until a few months ago. When coverage was that low, we would simply pick a file that had no coverage and click "add tests".
Right now we're up to over 40%, and we've managed to pick off most of the low-hanging fruit.
(The best part is that even at this low level of coverage, we've already run into many instances of the code doing the wrong thing, and the testing caught it. That's a huge motivator to push people to add more testing.)
This answers why you should be doing unit testing.
The three videos below cover unit testing in JavaScript, but the general principles apply across most languages.
Unit Testing: Minutes Now Will Save Hours Later - Eric Mann - https://www.youtube.com/watch?v=_UmmaPe8Bzc
JS Unit Testing (very good) - https://www.youtube.com/watch?v=-IYqgx8JxlU
Writing Testable JavaScript - https://www.youtube.com/watch?v=OzjogCFO4Zo
Now, I'm just learning about the subject, so I may not be 100% correct and there's more to it than what I'm describing here, but my basic understanding of unit testing is this: you write some test code (kept separate from your main code) that calls a function in your main code with the input (arguments) that the function requires, and the test then checks whether it gets back a valid return value. If it does, the unit testing framework you're using to run the tests shows a green light (all good); if the value is invalid, you get a red light, and you can then fix the problem straight away, before you release the new code to production. Without the test, you might never have caught the error.
So you write tests for your current code and write the code so that it passes the tests. Months later, you or someone else may need to modify that function in your main code; because you already wrote test code for it, you run the tests again, and a test may fail because the change introduced a logic error or made the function return something completely different from what it is supposed to return. Again, without the test in place, that error might be hard to track down, as it can affect other code as well and go unnoticed.
Also, the fact that a computer program runs through your code and tests it, instead of you doing it manually in the browser page by page, saves time (this is unit testing for JavaScript). Let's say you modify a function that is used by some script on a web page, and it works well for its new intended purpose. But let's also say, for argument's sake, that another function somewhere else in your code depends on that newly modified function to operate properly. The dependent function may now stop working because of the change you made to the first function; without tests that your computer runs automatically, you won't notice there's a problem until the dependent function is actually executed, and you'd have to manually navigate to a web page that includes the script which runs it before you notice the bug caused by your change.
To reiterate, having tests that run while you develop your application will catch these kinds of problems as you're coding. Without the tests in place, you'd have to go through your whole application manually, and even then it can be hard to spot the bug; you naively send it out into production, and after a while a kind user sends you a bug report (which won't be as good as the error messages from a testing framework).
It's quite confusing when you first hear of the subject: you think to yourself, "Am I not already testing my code? The code I've written is working like it's supposed to, so why do I need another framework?" Yes, you are already testing your code, but a computer is better at doing it. You just have to write good enough tests for a function/unit of code once, and the rest is taken care of for you by the mighty CPU, instead of you having to manually check that all of your code still works whenever you make a change.
Also, you don't have to unit test your code if you don't want to, but it pays off as your project/code base grows larger, because the chance of introducing bugs increases.
Unit testing and TDD in general give you shorter feedback cycles about the software you are writing. Instead of having a large test phase at the very end of the implementation, you incrementally test everything you write. This greatly increases code quality, as you immediately see where you might have bugs.