Anti-pattern: there must be at least two key elements present to formally distinguish an actual anti-pattern from a simple bad habit, bad practice, or bad idea:
Some repeated pattern of action, process or structure that initially appears to be beneficial, but ultimately produces more bad consequences than beneficial results, and
A refactored solution that is clearly documented, proven in actual practice and repeatable.
Vote for the TDD anti-pattern that you have seen "in the wild" one time too many.
See the blog post by James Carr and the related discussion on the testdrivendevelopment Yahoo group.
If you've found an 'unnamed' one.. post 'em too. One post per anti-pattern please to make the votes count for something.
My vested interest is to find the top-n subset so that I can discuss 'em in a lunchbox meet in the near future.
Second Class Citizens - test code isn't as well refactored as production code, containing a lot of duplicated code, making it hard to maintain tests.
The Free Ride / Piggyback -- James Carr, Tim Ottinger
Rather than write a new test case method to test another/distinct feature/functionality, a new assertion (and its corresponding actions i.e. Act steps from AAA) rides along in an existing test case.
Happy Path
The test stays on happy paths (i.e. expected results) without testing for boundaries and exceptions.
JUnit Antipatterns
The Local Hero
A test case that is dependent on something specific to the development environment it was written on in order to run. The result is the test passes on development boxes, but fails when someone attempts to run it elsewhere.
The Hidden Dependency
Closely related to the local hero, a unit test that requires some existing data to have been populated somewhere before the test runs. If that data wasn’t populated, the test will fail and leave little indication to the developer what it wanted, or why… forcing them to dig through acres of code to find out where the data it was using was supposed to come from.
Sadly seen this far too many times with ancient .dlls which depend on nebulous and varied .ini files which are constantly out of sync on any given production system, let alone extant on your machine without extensive consultation with the three developers responsible for those dlls. Sigh.
Chain Gang
A couple of tests that must run in a certain order, i.e. one test changes the global state of the system (global variables, data in the database) and the next test(s) depends on it.
You often see this in database tests. Instead of doing a rollback in teardown(), tests commit their changes to the database. Another common cause is that changes to the global state aren't wrapped in try/finally blocks which clean up should the test fail.
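For illustration, here is a minimal JUnit 4 sketch of that cure - assuming an in-memory H2 database on the classpath; the JDBC URL, class and test names are illustrative, not from the original post. Each test runs inside its own transaction, and teardown rolls it back so no state leaks into the next test:

import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class OrderRepositoryTest {
    private Connection connection;

    @Before
    public void setUp() throws Exception {
        // Each test gets its own transaction; nothing is ever committed.
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
        connection.setAutoCommit(false);
    }

    @After
    public void tearDown() throws Exception {
        // Roll back instead of committing, so tests can run in any order.
        connection.rollback();
        connection.close();
    }

    @Test
    public void savesAnOrder() throws Exception {
        // ... insert via 'connection' and assert on the result ...
    }
}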
The Mockery
Sometimes mocking can be good, and handy. But sometimes developers can lose themselves in their effort to mock out everything that isn't being tested. In this case, a unit test contains so many mocks, stubs, and/or fakes that the system under test isn't even being tested at all; instead, the data returned from mocks is what is being tested.
Source: James Carr's post.
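A hedged sketch of what this looks like in practice, using Mockito (the PriceCalculator interface is made up for illustration). Note that the only thing the assertion can ever verify is the canned value that was just stubbed:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class MockeryTest {
    // Illustrative interface standing in for the 'system under test'.
    interface PriceCalculator {
        double total();
    }

    @Test
    public void totalPrice() {
        // Everything the system under test would do is stubbed out...
        PriceCalculator calculator = mock(PriceCalculator.class);
        when(calculator.total()).thenReturn(42.0);

        // ...so this assertion only proves that Mockito returns what we
        // told it to return; the real PriceCalculator logic never runs.
        assertEquals(42.0, calculator.total(), 0.001);
    }
}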
The Silent Catcher -- Kelly?
A test that passes if an exception is thrown... even if the exception that actually occurs is different from the one the developer intended.
See Also: Secret Catcher
[Test]
[ExpectedException(typeof(Exception))]
public void ItShouldThrowDivideByZeroException()
{
    // some code that throws another exception yet passes the test
}
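The same trap exists in JUnit 4 with @Test(expected = Exception.class). A minimal sketch of one fix - catch and assert the specific exception type, so any other exception still fails the test (the division below is a stand-in for the real code under test):

import static org.junit.Assert.fail;
import org.junit.Test;

public class DivisionTest {
    @Test
    public void shouldThrowArithmeticExceptionOnDivideByZero() {
        int zero = 0;
        try {
            int ignored = 1 / zero;  // the code under test
            fail("Expected an ArithmeticException");
        } catch (ArithmeticException expected) {
            // Pass: exactly the exception we wanted. Any other exception
            // type propagates up and fails the test instead of being
            // silently accepted.
        }
    }
}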
The Inspector
A unit test that violates encapsulation in an effort to achieve 100% code coverage, and that knows so much about what is going on in the object that any attempt to refactor will break the existing test, requiring every change to be reflected in the unit test.
'how do I test my member variables without making them public... just for unit-testing?'
Excessive Setup -- James Carr
A test that requires a huge setup in order to even begin testing. Sometimes several hundred lines of code are used to prepare the environment for one test, with several objects involved, which can make it difficult to really ascertain what is tested due to the “noise” of all of the setup going on. (Src: James Carr's post)
Anal Probe
A test which has to use insane, illegal or otherwise unhealthy ways to perform its task, like: reading private fields using Java's setAccessible(true), extending a class to access protected fields/methods, or having to put the test in a certain package to access package-global fields/methods.
If you see this pattern, the classes under test use too much data hiding.
The difference between this and The Inspector is that the class under test tries to hide even the things you need to test. So your goal is not to achieve 100% test coverage but to be able to test anything at all. Think of a class that has only private fields, a run() method without arguments and no getters at all. There is no way to test this without breaking the rules.
Comment by Michael Borgwardt: This is not really a test antipattern, it's pragmatism to deal with deficiencies in the code being tested. Of course it's better to fix those deficiencies, but that may not be possible in the case of 3rd party libraries.
Aaron Digulla: I kind of agree. Maybe this entry is really better suited for a "JUnit HOWTO" wiki and not an antipattern. Comments?
The Test With No Name -- Nick Pellow
The test that gets added to reproduce a specific bug in the bug tracker, and whose author thinks it does not warrant a name of its own. Instead of enhancing an existing, lacking test, a new test is created called testForBUG123.
Two years later, when that test fails, you may need to first try and find BUG-123 in your bug tracker to figure out the test's intent.
The Slow Poke
A unit test that runs incredibly slow. When developers kick it off, they have time to go to the bathroom, grab a smoke, or worse, kick the test off before they go home at the end of the day. (Src: James Carr's post)
a.k.a. the tests that won't get run as frequently as they should
The Butterfly
You have to test something which contains data that changes all the time, like a structure which contains the current date, and there is no way to nail the result down to a fixed value. The ugly part is that you don't care about this value at all. It just makes your test more complicated without adding any value.
The flap of its wings can cause a hurricane on the other side of the world. -- Edward Lorenz, The Butterfly Effect
The Flickering Test (Source : Romilly Cocking)
A test which just occasionally fails, not at specific times, and is generally due to race conditions within the test. Typically occurs when testing something that is asynchronous, such as JMS.
Possibly a super set to the 'Wait and See' anti-pattern and 'The Sleeper' anti-pattern.
The build failed, oh well, just run the build again. -- Anonymous Developer
Wait and See
A test that runs some set up code and then needs to 'wait' a specific amount of time before it can 'see' if the code under test functioned as expected. A testMethod that uses Thread.sleep() or equivalent is most certainly a "Wait and See" test.
Typically, you may see this if the test is testing code which generates an event external to the system such as an email, an http request or writes a file to disk.
Such a test may also be a Local Hero since it will FAIL when run on a slower box or an overloaded CI server.
The Wait and See anti-pattern is not to be confused with The Sleeper.
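One common remedy is to poll for the expected outcome with a timeout instead of sleeping for a fixed amount. A minimal, library-free sketch (the helper name is made up; libraries such as Awaitility package the same idea ready-made):

import static org.junit.Assert.fail;

import java.util.function.BooleanSupplier;

public final class WaitFor {
    // Poll the condition every 50 ms until it holds or the timeout expires.
    public static void waitUntil(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return;  // succeed as soon as the condition holds
            }
            Thread.sleep(50);  // short poll interval, not one blind full wait
        }
        fail("Condition not met within " + timeoutMillis + " ms");
    }
}

A test would then call, say, WaitFor.waitUntil(() -> outbox.contains(message), 5000) and finish as soon as the event arrives, rather than always paying the worst-case sleep.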
Inappropriately Shared Fixture -- Tim Ottinger
Several test cases in the test fixture do not even use or need the setup / teardown. Partly due to developer inertia about creating a new test fixture... it's easier to just add one more test case to the pile.
The Giant
A unit test that, although it is validly testing the object under test, can span thousands of lines and contain many many test cases. This can be an indicator that the system under test is a God Object (James Carr's post).
A sure sign of this one is a test that spans more than a few lines of code. Often, the test is so complicated that it starts to contain bugs of its own or flaky behavior.
I'll believe it when I see some flashing GUIs
An unhealthy fixation/obsession with testing the app via its GUI 'just like a real user'
Testing business rules through the GUI is a terrible form of coupling. If you write thousands of tests through the GUI, and then change your GUI, thousands of tests break.
Rather, test only GUI things through the GUI, and couple the GUI to a dummy system instead of the real system, when you run those tests. Test business rules through an API that doesn't involve the GUI. -- Bob Martin
“You must understand that seeing is believing, but also know that believing is seeing.” -- Denis Waitley
The Sleeper, aka Mount Vesuvius -- Nick Pellow
A test that is destined to FAIL at some specific time and date in the future. This often is caused by incorrect bounds checking when testing code which uses a Date or Calendar object. Sometimes, the test may fail if run at a very specific time of day, such as midnight.
'The Sleeper' is not to be confused with the 'Wait And See' anti-pattern.
That code will have been replaced long before the year 2000 -- Many developers in 1960
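A common cure, sketched below on the assumption that the code under test can accept a java.time.Clock (the Expiry class is illustrative): inject a fixed clock so the test is pinned to a deliberately awkward instant instead of whenever the suite happens to run.

import static org.junit.Assert.assertTrue;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import org.junit.Test;

public class SleeperFreeTest {
    // Illustrative class under test: it takes a Clock instead of calling
    // Instant.now() directly, so tests control what "now" means.
    static class Expiry {
        private final Clock clock;
        Expiry(Clock clock) { this.clock = clock; }
        boolean isExpired(Instant deadline) {
            return Instant.now(clock).isAfter(deadline);
        }
    }

    @Test
    public void behavesAtAwkwardInstants() {
        // Pin the clock to one second before New Year; the result no
        // longer depends on when the suite happens to run.
        Clock newYearsEve = Clock.fixed(Instant.parse("2030-12-31T23:59:59Z"),
                                        ZoneOffset.UTC);
        Expiry expiry = new Expiry(newYearsEve);
        assertTrue(expiry.isExpired(Instant.parse("2030-06-01T00:00:00Z")));
    }
}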
The Dead Tree
A test where a stub was created, but the test was never actually written.
I have actually seen this in our production code:
class TD_SomeClass {
    public void testAdd() {
        assertEquals(1 + 1, 2);
    }
}
I don't even know what to think about that.
got bit by this today:
Wet Floor:
The test creates data that is persisted somewhere, but the test does not clean up when finished. This causes tests (the same test, or possibly other tests) to fail on subsequent test runs.
In our case, the test left a file lying around in the "temp" dir, with permissions from the user that ran the test the first time. When a different user tried to test on the same machine: boom. In the comments on James Carr's site, Joakim Ohlrogge referred to this as the "Sloppy Worker", and it was part of the inspiration for "Generous Leftovers". I like my name for it better (less insulting, more familiar).
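A minimal JUnit 4 sketch of the cure (the test and file names are illustrative): use the TemporaryFolder rule, which creates a fresh directory for every test and deletes it afterwards, whether the test passes or fails.

import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class DryFloorTest {
    // A fresh directory per test, removed automatically afterwards --
    // no leftovers for the next run (or the next user) to trip over.
    @Rule
    public TemporaryFolder temp = new TemporaryFolder();

    @Test
    public void writesReportFile() throws Exception {
        File report = temp.newFile("report.txt");
        // ... exercise the code under test so that it writes to 'report' ...
        assertTrue(report.exists());
    }
}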
The Cuckoo -- Frank Carver
A unit test which sits in a test case with several others, and enjoys the same (potentially lengthy) setup process as the other tests in the test case, but then discards some or all of the artifacts from the setup and creates its own.
Advanced Symptom of : Inappropriately Shared Fixture
The Secret Catcher -- Frank Carver
A test that at first glance appears to be doing no testing, due to absence of assertions. But "The devil is in the details".. the test is really relying on an exception to be thrown and expecting the testing framework to capture the exception and report it to the user as a failure.
[Test]
public void ShouldNotThrow()
{
    DoSomethingThatShouldNotThrowAnException();
}
The Environmental Vandal
A 'unit' test which for various 'requirements' starts spilling out into its environment, using and setting environment variables / ports. Running two of these tests simultaneously will cause 'unavailable port' exceptions etc.
These tests will be intermittent, and leave developers saying things like 'just run it again'.
One solution I've seen is to randomly select a port number to use. This reduces the possibility of a conflict, but clearly doesn't solve the problem. So if you can, always mock the code so that it doesn't actually allocate the unsharable resource.
The Turing Test
A testcase automagically generated by some expensive tool that has many, many asserts gleaned from the class under test using some too-clever-by-half data flow analysis. Lulls developers into a false sense of confidence that their code is well tested, absolving them from the responsibility of designing and maintaining high quality tests. If the machine can write the tests for you, why can't it pull its finger out and write the app itself!
Hello stupid. -- World's smartest computer to new apprentice (from an old Amiga comic).
The Forty Foot Pole Test
Afraid of getting too close to the class they are trying to test, these tests act at a distance, separated by countless layers of abstraction and thousands of lines of code from the logic they are checking. As such they are extremely brittle, and susceptible to all sorts of side-effects that happen on the epic journey to and from the class of interest.
Doppelgänger
In order to test something, you have to copy parts of the code under test into a new class with the same name and package and you have to use classpath magic or a custom classloader to make sure it is visible first (so your copy is picked up).
This pattern indicates an unhealthy amount of hidden dependencies which you can't control from a test.
I looked at his face ... my face! It was like a mirror but made my blood freeze.
The Mother Hen -- Frank Carver
A common setup which does far more than the actual test cases need. For example creating all sorts of complex data structures populated with apparently important and unique values when the tests only assert for presence or absence of something.
Advanced Symptom of: Inappropriately Shared Fixture
I don't know what it does ... I'm adding it anyway, just in case. -- Anonymous Developer
The Test It All
I can't believe this hasn't been mentioned till now, but tests should not break the Single Responsibility Principle.
I have come across this so many times, tests that break this rule are by definition a nightmare to maintain.
Line hitter
At first glance, the tests cover everything, and the code coverage tools confirm it with 100%. But in reality the tests only hit the code, without analysing any of its outputs.
Related
Code evolves, and as it does, it also decays if not pruned, a bit like a garden in that respect. Pruning means refactoring to make it fulfill its evolving purpose.
Refactoring is much safer if we have a good unit test coverage.
Test-driven development forces us to write the test code first, before the production code. Hence, we can't test the implementation, because there isn't any. This makes it much easier to refactor the production code.
The TDD cycle is something like this: write a test, test fails, write production code until the test succeeds, refactor the code.
But from what I've seen, people refactor the production code, but not the test code. As test code decays, the production code will go stale and then everything goes downhill. Therefore, I think it is necessary to refactor test code.
Here's the problem: How do you ensure that you don't break the test code when you refactor it?
(I've done one approach, https://thecomsci.wordpress.com/2011/12/19/double-dabble/, but I think there might be a better way.)
Apparently there's a book, http://www.amazon.com/dp/0131495054, which I haven't read yet.
There's also a Wiki page about this, http://c2.com/cgi/wiki?RefactoringTestCode, which doesn't have a solution.
Refactoring your tests is a two-step process. Simply stated: first, use your application under test to ensure that the tests pass while you refactor them. Then, after your refactored tests are green, you must ensure that they can still fail. Doing this properly requires some specific steps.
To properly test your refactored tests, change the application under test so as to make a test fail. Only that test condition should fail; that way you ensure the test fails properly in addition to passing. Strive for a single test failure, though that will not always be possible (i.e. outside unit tests). If you are refactoring correctly, there will be a single failure in the refactored tests, and any other failures will exist in tests not related to the current refactoring. Understanding your codebase is required to properly identify cascading failures of this type, and such failures only apply to tests other than unit tests.
I think you should not change your test code.
Why?
In TDD, you define an interface for a class.
This interface contains methods that are defined with a certain set of functionality: the requirements / design.
First: These requirements do not change while refactoring your production code. Refactoring means: changing/cleaning the code without changing the functionality.
Second: The test checks a certain set of functionality, this set stays the same.
Conclusion: Refactoring test and refactoring your production code are two different things.
Tip: When writing your tests, write clean code. Make small tests, each of which really tests one piece of the functionality.
But "Your design changes because of unforeseen changes to the requirements". This may lead or may not lead to changes in the interface.
When your requirements change, your tests must change. This is not avoidable.
You have to keep in mind that this is a new TDD cycle. First test the new functionality and remove the old functionality tests. Then implement the new design.
To make this work properly, you need clean and small tests.
Example (sketched in code below):
MethodOne does: changeA and changeB.
Don't put this in one unit test; make a test class with two unit tests.
Both execute MethodOne, but they check for different results (changeA, changeB).
When the specification of changeA changes, you only need to rewrite one test method.
When MethodOne gets a new specification, changeC: add a unit test.
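A minimal sketch of that advice; Processor, methodOne and the two getters are illustrative names, not from the question:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ProcessorTest {
    // Tiny stand-in so the sketch compiles.
    static class Processor {
        private String a, b;
        void methodOne() { a = "expectedA"; b = "expectedB"; }  // changeA and changeB
        String getA() { return a; }
        String getB() { return b; }
    }

    @Test
    public void methodOneAppliesChangeA() {
        Processor p = new Processor();
        p.methodOne();
        assertEquals("expectedA", p.getA());  // checks changeA only
    }

    @Test
    public void methodOneAppliesChangeB() {
        Processor p = new Processor();
        p.methodOne();
        assertEquals("expectedB", p.getB());  // checks changeB only
    }
}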
With the above example your tests will be more agile and easier to change.
Summary:
Don't refactor your tests when refactoring your production code.
Write clean and agile tests.
Hope this helps.
Good luck with it.
#disclaimer: I do not want your money if this makes you rich.
How do you ensure that you don't break the test code when you refactor it?
Rerunning the tests should suffice in most cases.
There are some other strategies described here but they might be overkill compared to the few benefits you get.
This is a Java solution - I don't know what language you're programming in!
I just read "Clean Code" by one of the Martins, a book which argues that refactoring test code to keep it clean and readable is a fine idea, and indeed a goal. So the ambition to refactor and keep test code clean is good, not a silly idea like I first thought.
But that's not what you asked, so let's take a shot at answering!
I'd keep a db of your tests - or the last test result, anyway.
With a bit of java annotating, you can do something like this:
@SuperTestingFrameworkCapable
public class MyFancyTest {
    @TestEntry
    @Test
    public void testXEqualsYAfterConstructors() {
        @TestElement
        // create my object X
        @TestElement
        // create my object Y
        @TheTest
        AssertTrue(X.equals(Y));
    }
}
ANYWAY, you'd also need a reflection and annotation-processing super class, that would inspect this code. It could just be an extra step in your processing - write tests, pass through this super processor, and then, if it passes, run the tests.
And your super processor is going to use a schema
MyFancyTest
And for each member you have in your class, it will use a new table - here the (only) table would be testXEqualsYAfterConstructors
And that table would have columns for each item marked with the @TestElement annotation. And it would also have a column for @TheTest.
I suppose you'd just call the columns TestElement1, TestElement2 etc.
And THEN, once it had set all this up, it would just save the variable names and the line annotated @TheTest.
So the table would be:

testXEqualsYAfterConstructors
TestElement1     | TestElement2     | TheTest
SomeObjectType X | SomeObjectType Y | AssertTrue(X.equals(Y));
So, if the super processor goes and finds that tables exist, then it can compare what is already there with what is now in the code, and it can raise an alert for each differing entry. And you can create a new user - an Admin - who receives the changes and can check over them, Crucible-style, and approve them or not.
And then you can market this solution for this problem, sell your company for 100M and give me 20%.
cheers!
Slow day, so here's the rationale: your solution uses a lot of extra overhead, most damagingly in the actual production code. Your prod code shouldn't be tied to your test code, ever, and it certainly shouldn't have random variables that are test-specific in it.
The next suggestion I have with the code you put up is that your framework doesn't stop people breaking tests. After all, you can have this:
@Test
public void equalsIfSameObject()
{
    Person expected = createPerson();
    Person actual = expected;
    check(Person.FEATURE_EQUAL_IF_SAME_OBJECT);
    boolean isEqual = actual.equals(expected);
    assertThat(isEqual).isTrue();
}
But if I change the last two lines of code in some "refactoring" of test classes, then your framework is going to report a success, but the test won't do anything. You really need to ensure that an alert is raised and people can look at the "difference".
Then again, you might just want to use svn or perforce and crucible to compare and check this stuff!
Also, seeing as you're keen on a new idea, you'll want to read about local annotations: http://stackoverflow.com/questions/3285652/how-can-i-create-an-annotation-processor-that-processes-a-local-variable
You might also need the custom Java compiler that guy mentions - see the last comment in the link above.
#Disclaimer
If you create a new company with code that pretty much follows the above, I reserve the right to 20% of the company if and when you're worth more than 30M, at a time of my choosing
Until about two months ago, your question was one of my main questions about refactoring. Just let me explain my experience:
When you want to refactor a method, you should cover it with unit tests (or any other tests) to be sure you are not breaking something during refactoring. (In my case, the team knew the code worked well because they had been using it for 6 years; they just needed to improve it, so all of my unit tests passed in the first step.) So in the first step you have some passing unit tests that cover all scenarios. If some of the unit tests fail, you should first fix the problem to be sure your method works correctly.
After all tests pass, you refactor the method and run your tests again to be sure everything is right. Any changes in the test code?
You should write tests that are independent of the internal structure of the method. After refactoring, you should only need to change some small part of the test code, and most of the time no changes are required, because refactoring improves the structure without changing the behavior. If your test code needed to change a lot, you would never know whether you broke something in the main code during refactoring.
And the most important thing for me to remember: in every test, one behavior should be considered.
I hope I explained it well.
I was reading the Joel Test 2010 and it reminded me of an issue I had with unit testing.
How do I really unit test something? Do I unit test functions? Only full classes? What if I have 15 classes that are each under 20 lines: should I write a 35-line unit test for each class, bringing 15*20 lines to 15*(20+35) lines (that's from 300 to 825, nearly 3x more code)?
If a class is used by only two other classes in the module, should I unit test it, or would the tests against the other two classes suffice? What if they are all under 30 lines of code; should I bother?
What if I write code to dump data and I never need to read it back, because another app consumes it? The other app isn't command line, or it is but there's no way to verify whether the data is good. Do I still need to unit test it?
What if the app is a utility and the total is under 500 lines of code? Or it is used that week and will be used in the future, but always needs to be reconfigured because it is meant for a quick batch process, and each project will require tweaks even though the desired output is unchanged (I'm trying to say there's no way around it; for valid reasons it will always be tweaked). Do I unit test it, and if so, how? (Maybe we don't care if we break a feature used in the past but not in the present or future.)
etc.
I think this should be a wiki. Maybe people would like to give examples of exactly what they should unit test (or should not)? Maybe links to books would be good. I tried one, but it never clarified what should be unit tested - just the problems of writing unit tests and solutions to them.
Also, if classes are meant to be used only in that project (by design, spec or whatever other reason) and the class isn't useful alone (let's say it generates the HTML for comment data), do I really need to test it? Say, by checking whether all public functions accept null comment objects, when my project doesn't ever use null comments. It's those kinds of things that make me wonder if I am unit testing the wrong code. Also, tons of classes are throwaway when the project ends. It's the borderline-throwaway, or not-very-useful-alone, code which bothers me.
Here's what I'm hearing, whether you meant it this way or not: a whole litany of issues and excuses why unit testing might not be applicable to your code. In other words: "I don't see what I'll be getting out of unit tests, and they're a lot of bother to write; maybe they're not for me?"
You know what? You may be right. Unit tests are not a panacea. There are huge, wide swaths of testing that unit testing can't cover.
I think, though, that you're misestimating the cost of maintenance, and what things can break in your code. So here are my thoughts:
Should I test small classes? Yes, if there are things in that class that can possibly break.
Should I test functions? Yes, if there are things in this function that can possibly break. Why wouldn't you? Or is your concern over whether it's considered a unit or not? That's just quibbling over names, and shouldn't have any bearing on whether you should write unit tests for it! But it's common in my experience to see a method or function described as a unit under test.
Should I unit test a class if it's used by two other classes? Yes, if there's anything that can possibly break in that class. Should I test it separately? The advantage of doing so is to be able to isolate breakages straight down to the shared class, instead of hunting through the using classes to see if it was they that broke or one of their dependencies.
Should I test data output from my class if another program will read it? Hell yes, especially if that other program is a 3rd-party one! This is a great application of unit tests (or perhaps system tests, depending on the isolation involved in the test): to prove to yourself that the data you output is precisely what you think you should have output. I think you'll find that has the power to simplify support calls immeasurably. (Though please note it's not a substitute for good acceptance testing on that customer's end.)
Should I test throwaway code? Possibly. Will pursuing a TDD strategy get your throwaway code out the door faster? It might. Will having solid unit-tested chunks that you can adapt to new constraints reduce the need to throw code away? Perhaps.
Should I test code that's constantly changing? Yes. Just make sure all applicable tests are brought up to date and pass! Constantly changing code can be particularly susceptible to errors, after all, and enabling safe change is another of unit testing's great benefits. Plus, it probably puts a burden on your invariant code to be as robust as possible, to enable this velocity of change. And you know how you can convince yourself whether a piece of code is robust...
Should I test features that are no longer needed? No, you can remove the test, and probably the code as well (testing to ensure you didn't break anything in the process, of course!). Don't leave unit test rot around, especially if the test no longer works or runs, or people in your org will move away from unit tests and you'll lose the benefit. I've seen this happen. It's not pretty.
Should I test code that doesn't get used by my project, even if it was written in the context of my project? Depends on what the deliverable of your project is, and what the priorities of your project are. But are you sure nobody outside of your project will use it? If they won't, and you aren't, perhaps it's just dead code, in which case see above. From my point of view, I wouldn't feel I'd done a complete job with a class if my testing didn't cover all its important functionality, whether the project used all that functionality or not. I like classes that feel complete, but I keep an eye towards not overengineering a bunch of stuff I don't need. If I put something in a class, then, I intend for it to be used, and will therefore want to make sure it works. It's an issue of personal quality and satisfaction to me.
Don't get fixated on counting lines of code. Write as much test code as you need to convince yourself that every key piece of functionality is being thoroughly tested. As an extreme example, the SQLite project has a tests:source-code ratio of more than 600:1. I use the term "extreme" in a good sense here; the ludicrous amount of testing that goes on is possibly the predominant reason that SQLite has taken over the world.
How can you do all those calculations? Ideally you should never be in a situation where you count the lines of your completed class and then start writing the unit test from scratch. Those two types of code (real code and test code) should be developed and evolved together, and the only LOC metric that should really worry you in the end is 0 LOC of test code.
Relative LOC counts for code and tests are pointless. What matters more is test coverage. What matters most is finding the bugs.
When I'm writing unit tests, I tend to focus my efforts on testing complicated code that is more likely to contain bugs. Simple stuff (e.g. simple getter and setter methods) is unlikely to contain bugs, and can be tested indirectly by higher-level unit tests.
Some time ago, I had the same question you have posted in mind. I studied a lot of articles, tutorials, books and so on... Although these resources gave me a good starting point, I was still insecure about how to apply unit testing efficiently. After coming across xUnit Test Patterns: Refactoring Test Code (and leaving it on my shelf for about a year - you know, we have a lot of stuff to study), it gave me what I needed to apply unit testing efficiently. With a lot of useful patterns (and advice), you will see how you can become a unit-testing coder. Topics such as:
Test strategy patterns
Basic patterns
Fixture setup patterns
Result verification patterns
Test double patterns
Test organization patterns
Database patterns
Value patterns
And so on...
I will show you, for instance, the Derived Value pattern:
A derived input is often employed when we need to test a method that takes a complex object as an argument. For example, thorough input-validation testing requires that we exercise the method with each of the attributes of the object set to one or more possible invalid values. Because the first rejected value could cause termination of the method, we must verify each bad attribute in a separate call. We can instantiate the invalid object easily by first creating a valid object and then replacing one of its attributes with an invalid value.
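A hedged sketch of that idea (Customer, isValid and the attribute values are all illustrative): build a known-valid object once, then break exactly one attribute per test.

import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class DerivedValueTest {
    // Minimal stand-ins so the sketch compiles.
    static class Customer {
        String name = "Alice";
        int age = 30;
    }

    static boolean isValid(Customer c) {
        return c.name != null && !c.name.isEmpty() && c.age >= 0;
    }

    private Customer validCustomer() {
        return new Customer();  // every attribute valid by construction
    }

    @Test
    public void rejectsEmptyName() {
        Customer c = validCustomer();
        c.name = "";             // derive the invalid input from the valid one
        assertFalse(isValid(c));
    }

    @Test
    public void rejectsNegativeAge() {
        Customer c = validCustomer();
        c.age = -1;              // each bad attribute gets its own call
        assertFalse(isValid(c));
    }
}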
A test organization pattern related to your question (Testcase Class per Feature):
As the number of test methods grows, we need to decide which testcase class to put each test method in... Using a testcase class per feature gives us a systematic way to break up a large testcase class into several smaller ones without having to change our test methods.
My advice: read it carefully. (Source: xunitpatterns.com)
You seem to be concerned that there could be more test-code than the code-under-test.
I think the ratios could well be higher than you say. I would expect any serious test to exercise a wide range of inputs, so your 20-line class might well have 200 lines of test code.
I do not see that as a problem. The interesting thing for me is that writing tests doesn't seem to slow me down. Rather it makes me focus on the code as I write it.
So, yes test everything. Try not to think of testing as a chore.
I am part of a team that have just started adding test code to our existing, and rather old, code base.
I use 'test' here because I feel it can be very vague whether something is a unit test, or a system test, or an integration test, or whatever. The differences between the terms have large grey areas and don't add a lot of value.
Because we live in the real world, we don't have time to add test code for all of the existing functionality. We still have Dave the test guy, who finds most bugs. Instead, we write tests as we develop. You know how you run your code before you tell your boss that it works? Well, use a unit framework (we use JUnit) to do those runs. And just keep them all, rather than deleting them. Whatever you normally do to convince yourself that it works - do that.
If it is easy to write the test code, do it. If not, leave it to Dave until you think of a good way to automate it, or until you get that spare time between projects where 'they' are trying to decide what to put into the next release.
For Java you can use JUnit:
JUnit
JUnit is a simple framework to write repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks.
Getting Started
To get started with unit testing and JUnit read the article: JUnit Cookbook.
This article describes basic test writing using JUnit 4.
You find additional samples in the org.junit.samples package:
* SimpleTest.java - some simple test cases
* VectorTest.java - test cases for java.util.Vector
JUnit 4.x only comes with a textual TestRunner. For graphical feedback, most major IDE's support JUnit 4. If necessary, you can run JUnit 4 tests in a JUnit 3 environment by adding the following method to each test class:
public static Test suite() {
    return new JUnit4TestAdapter(ThisClass.class);
}
Documentation
JUnit Cookbook
A cookbook for implementing tests with JUnit.
Javadoc
API documentation generated with javadoc.
Frequently asked questions
Some frequently asked questions about using JUnit.
Release notes
Latest JUnit release notes
License
The terms of the common public license used for JUnit.
The following documents still describe JUnit 3.8.
The JUnit 3.8 version of this homepage
Test Infected - Programmers Love Writing Tests
An article demonstrating the development process with JUnit.
JUnit - A Cook's Tour
JUnit Related Projects/Sites
* junit.org - a site for software developers using JUnit. It provides instructions for how to integrate JUnit with development tools like JBuilder and VisualAge/Java, as well as articles about, and extensions to, JUnit.
* XProgramming.com - various implementations of the xUnit testing framework architecture.
Mailing Lists
There are three JUnit mailing lists:
* JUnit announce: junit-announce@lists.sourceforge.net
* JUnit users list: junit@yahoogroups.com
* JUnit developer list: junit-devel@lists.sourceforge.net
Get Involved
JUnit celebrates programmers testing their own software. As a result bugs, patches, and feature requests which include JUnit TestCases have a better chance of being addressed than those without.
JUnit source code is now hosted on GitHub.
One possibility is to reduce the 'test code' to a language that describes your tests, and an interpreter to run the tests. Teams I have been a part of have used this to wonderful ends, allowing us to write significantly more tests than the "lines of code" would have indicated.
This allowed our tests to be written much more quickly and greatly increased the test legibility.
I am going to answer what I believe are the main points of your question. First, how much test-code should you write? Well, Test-Driven Development can be of some help here. I do not use it as strictly as it is proposed in theory, but I find that writing a test first often helps me to understand the problem I want to solve much better. Also, it will usually lead to good test-coverage.
Secondly, which classes should you test? Again, TDD (or more precisely some of the principles behind it) can be of help. If you develop your system top down and write your tests first, you will have tests for the outer class when writing the inner class. These tests should fail if the inner class has bugs.
TDD is also tightly coupled with the idea of Design for Testability.
My answer is not intended to solve all your problems, but to give you some ideas.
I think it's impossible to write a comprehensive guide of exactly what you should and shouldn't unit test. There are simply too many permutations and types of objects, classes, and functions, to be able to cover them all.
I suggest applying personal responsibility to the testing, and determining the answer yourself. It's your code, and you're responsible for it working. If it breaks, you have to pay the consequences of fixing the code, repairing the data, taking responsibility for the lost revenue, and apologizing to the people whose application broke while they were trying to use it. Bottom line - your code should never break. So what do you have to do to ensure this?
Sometimes unit testing can work well to help you test out all of the specific methods in a library. Sometimes unit testing is just busy-work, because you can tell the code is working based on your use of the code during higher-level testing. You're the developer, you're responsible for making sure the code never breaks - what do you think is the best way to achieve that?
If you think unit testing is a waste of time in a specific circumstance - it probably is. If you've tested the code in all of the application use-case scenarios and they all work, the code is probably good.
If anything is happening in the code that you don't understand - even if the end result is acceptable - then you need to do some more testing to make sure there's nothing you don't understand.
To me, this seems like common sense.
Unit testing is mostly for testing your units from the aspect of functionality: you can test whether, for a specific input, we receive the expected value or the right exception is thrown.
Unit tests are very useful, and I recommend writing them. However, not everything needs to be tested. For example, you don't need to test simple getters and setters.
If you want to write your unit tests in Java via Eclipse, please look at "How To Write Java Unit Tests". I hope it helps.
I'm currently reading two excellent books, "Working Effectively with Legacy Code" and "Clean Code".
They are making me think about the way I write and work with code in completely new ways, but one theme common to both is test-driven development and the idea of smothering everything with tests, having tests in place before you make a change or implement a new piece of functionality.
This has led to two questions:
Question 1:
Suppose I am working with legacy code. According to the books, I should put tests in place to ensure I'm not breaking anything. Consider a method 500 lines long: I assume I'll have a set of equivalent testing methods to test it. When I split this function up, do I create new tests for each new method/class that results?
According to "Clean Code", any test that takes longer than 1/10th of a second is a test that takes too long. Trying to test a 500-line legacy method that goes to databases and does god knows what else could well take longer than 1/10th of a second. While I understand you need to break dependencies, what I'm having trouble with is the initial test creation.
Question 2:
What happens when the code is refactored so much that structurally it no longer resembles the original code (parameters added to or removed from methods, etc.)? It would follow that the tests need refactoring also - but in that case you could potentially be altering the functionality of the system while the tests keep passing. Is refactoring tests an appropriate thing to do in this circumstance?
While it's OK to plod on with assumptions, I was wondering whether there are any thoughts/suggestions on such matters from collective experience.
That's the deal when working with legacy code - legacy meaning a system with no tests and which is tightly coupled. When adding tests for that code, you are effectively adding integration tests. When you refactor and add the more specific test methods that avoid the network calls etc., those become your unit tests. You want to keep both; just keep them separate, so that most of your unit tests run fast.
You do that in really small steps. You actually switch continually between tests and code, and you are correct, if you change a signature (small step) related tests need to be updated.
Also check my "update 2" on How can I improve my junit tests. It isn't about legacy code and dealing with the coupling it already has, but on how you go about writing logic + tests where external systems are involved i.e. databases, emails, etc.
The 0.1s unit test run time is fairly silly. There's no reason unit tests shouldn't use a network socket, read a large file or perform other hefty operations if they have to. Yes, it's nice if the tests run quickly so you can get on with the main job of writing the application, but it's much nicer to end up with the best result at the end, and if that means running a unit test that takes 10s then that's what I'd do.
If you're going to refactor the key is to spend as much time as you need to understand the code you are refactoring. One good way of doing that would be to write a few unit tests for it. As you grasp what certain blocks of code are doing you could refactor it and then it's good practice to write tests for each of your new methods as you go.
Yes, create new tests for new methods.
I'd see the 1/10 of a second as a goal you should strive for. A slower test is still much better than no test.
Try not to change the code and the test at the same time. Always take small steps.
When you've got a lengthy legacy method that does X (and maybe Y and Z because of its size), the real trick is not breaking the app by 'fixing' it. The tests on the legacy app have preconditions and postconditions and so you've got to really know those before you go breaking it up. The tests help to facilitate that. As soon as you break that method into two or more new methods, obviously you need to know the pre/post states for each of those and so tests for those 'keep you honest' and let you sleep better at night.
I don't tend to worry too much about the 1/10th of a second assertion. Rather, the goal when I'm writing unit tests is to cover all my bases. Obviously, if a test takes a long time, it might be because what is being tested is simply way too much code doing way too much.
The bottom line is that you definitely don't want to take what is presumably a working system and 'fix' it to the point that it works sometimes and fails under certain conditions. That's where the tests can help. Each of them expects the world to be in one state at the beginning of the test and a new state at the end. Only you can know if those two states are correct. All the tests can 'pass' and the app can still be wrong.
Anytime the code gets changed, the tests will possibly change and new ones will likely need to be added to address changes made to the production code. Those tests work with the current code - doesn't matter if the parameters needed to change, there are still pre/post conditions that have to be met. It isn't enough, obviously, to just break up the code into smaller chunks. The 'analyst' in you has to be able to understand the system you are building - that's job one.
Working with legacy code can be a real chore depending on the 'mess' you start with. I really find that knowing what you've got and what it is supposed to do (and whether it actually does it at step 0 before you start refactoring it) is key to a successful refactoring of the code. One goal, I think, is that I ought to be able to toss out the old stuff, stick my new code in its place and have it work as advertised (or better). Depending on the language it was written in, the assumptions made by the original author(s) and the ability to encapsulate functionality into containable chunks, it can be a real trick.
Best of luck!
Here's my take on it:
No and yes. The first thing is to have a unit test that checks the output of that 500-line method; only then do you begin thinking of splitting it up. Ideally the process will go like this:
Write a test for the original legacy 500-line behemoth
Figure out, marking first with comments, what blocks of code you could extract from that method
Write a test for each block of code. All will fail.
Extract the blocks one by one. Concentrate on getting all the methods to go green, one at a time.
Rinse and repeat until you've finished the whole thing
After this long process you will realize that it might make sense for some methods to be moved elsewhere, or that several repetitive ones can be reduced to a single function; this is how you know that you succeeded. Edit tests accordingly.
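For the first step, here is a hedged sketch of such a characterization test; LegacyReport and its inputs are illustrative, and the expected value is simply whatever the method produces today (often kept in a golden file):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class LegacyReportCharacterizationTest {
    // Tiny stand-in so the sketch compiles; imagine 500 lines here.
    static class LegacyReport {
        String generate(String period, String currency) {
            return period + ":" + currency;
        }
    }

    @Test
    public void pinsDownCurrentBehaviour() {
        LegacyReport report = new LegacyReport();
        // The expected string is captured from the current implementation;
        // its job is to detect any change in behaviour while extracting blocks.
        assertEquals("2009-01:EUR", report.generate("2009-01", "EUR"));
    }
}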
Go ahead and refactor, but as soon as you need to change signatures, make the changes in your test first, before you make the change in your actual code. That way you make sure that you're still making the correct assertions given the change in method signature.
Question 1: "When I split this function up, do I create new tests for each new method/class that results?"
As always, the real answer is: it depends. When refactoring some gigantic monolithic method into smaller methods that handle different component parts, it may be simpler to set your new methods to private/protected and leave your existing API intact, in order to keep using your existing unit tests. If you need to test your newly split-off methods, it is sometimes advantageous to mark them package-private so that your unit-testing classes can get at them but other classes cannot.
Question 2: "What happens when the code is re-factored so much that structurally it no longer resembles the original code?"
My first piece of advice here is that you need a good IDE and a good knowledge of regular expressions: try to do as much of your refactoring using automated tools as possible. This can save time if you are cautious enough not to introduce new problems. As you said, you have to change your unit tests - but if you used good OOP principles (you did, right?), then it shouldn't be so painful.
Overall, it is important to ask yourself with regards to the refactor do the benefits outweigh the costs? Am I just fiddling around with architectures and designs? Am I doing a refactor in order to understand the code and is it really needed? I would consult a coworker who is familiar with the code base for their opinion on the cost/benefits of your current task.
Also remember that the theoretical ideal you read in books needs to be balanced with real world business needs and time schedules.
I'm sure most of you are writing lots of automated tests and that you also have run into some common pitfalls when unit testing.
My question is do you follow any rules of conduct for writing tests in order to avoid problems in the future? To be more specific: What are the properties of good unit tests or how do you write your tests?
Language agnostic suggestions are encouraged.
Let me begin by plugging sources - Pragmatic Unit Testing in Java with JUnit (there's a version with C# / NUnit too, but I have this one; it's agnostic for the most part; recommended).
Good tests should be A TRIP (the acronym isn't sticky enough - I have a printout of the cheatsheet in the book that I had to pull out to make sure I got this right):
Automatic : Invoking of tests as well as checking results for PASS/FAIL should be automatic
Thorough: Coverage; Although bugs tend to cluster around certain regions in the code, ensure that you test all key paths and scenarios.. Use tools if you must to know untested regions
Repeatable: Tests should produce the same results each time.. every time. Tests should not rely on uncontrollable params.
Independent: Very important.
Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem.
Tests should not rely on each other - Isolated. No assumptions about order of test execution. Ensure 'clean slate' before each test by using setup/teardown appropriately
Professional: In the long run you'll have as much test code as production (if not more), therefore follow the same standard of good-design for your test code. Well factored methods-classes with intention-revealing names, No duplication, tests with good names, etc.
Good tests also run Fast. Any test that takes over half a second to run needs to be worked upon. The longer the test suite takes to run, the less frequently it will be run, and the more changes the dev will try to sneak in between runs... if anything breaks, it will take longer to figure out which change was the culprit.
Update 2010-08:
Readable : This can be considered part of Professional - however it can't be stressed enough. An acid test would be to find someone who isn't part of your team and asking him/her to figure out the behavior under test within a couple of minutes. Tests need to be maintained just like production code - so make it easy to read even if it takes more effort. Tests should be symmetric (follow a pattern) and concise (test one behavior at a time). Use a consistent naming convention (e.g. the TestDox style). Avoid cluttering the test with "incidental details".. become a minimalist.
Apart from these, most of the others are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (e.g. third-party DLLs). Don't go about testing getters and setters. Keep an eye on cost-to-benefit ratio or defect probability.
Don't write ginormous tests. As the 'unit' in 'unit test' suggests, make each one as atomic and isolated as possible. If you must, create preconditions using mock objects, rather than recreating too much of the typical user environment manually.
Don't test things that obviously work. Avoid testing the classes from a third-party vendor, especially the one supplying the core APIs of the framework you code in. E.g., don't test adding an item to the vendor's Hashtable class.
Consider using a code coverage tool such as NCover to help discover edge cases you have yet to test.
Try writing the test before the implementation. Think of the test as more of a specification that your implementation will adhere to. Cf. also behavior-driven development, a more specific branch of test-driven development.
Be consistent. If you only write tests for some of your code, it's hardly useful. If you work in a team, and some or all of the others don't write tests, it's not very useful either. Convince yourself and everyone else of the importance (and time-saving properties) of testing, or don't bother.
Most of the answers here seem to address unit testing best practices in general (when, where, why and what), rather than actually writing the tests themselves (how). Since the question seemed pretty specific on the "how" part, I thought I'd post this, taken from a "brown bag" presentation that I conducted at my company.
Womp's 5 Laws of Writing Tests:
1. Use long, descriptive test method names.
- Map_DefaultConstructorShouldCreateEmptyGisMap()
- ShouldAlwaysDelegateXMLCorrectlyToTheCustomHandlers()
- Dog_Object_Should_Eat_Homework_Object_When_Hungry()
2. Write your tests in an Arrange/Act/Assert style.
While this organizational strategy has been around for a while and called many things, the introduction of the "AAA" acronym recently has been a great way to get this across. Making all your tests consistent with AAA style makes them easy to read and maintain.
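A minimal AAA-style sketch in JUnit; the ShoppingCart class is an illustrative stand-in:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShoppingCartTest {
    // Tiny stand-in so the sketch compiles.
    static class ShoppingCart {
        private int total;
        void add(int price) { total += price; }
        int total() { return total; }
    }

    @Test
    public void Cart_TotalShouldSumItemPrices() {
        // Arrange
        ShoppingCart cart = new ShoppingCart();
        cart.add(10);
        cart.add(5);

        // Act
        int total = cart.total();

        // Assert
        assertEquals(15, total);
    }
}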
3. Always provide a failure message with your Asserts.
Assert.That(x == 2 && y == 2, "An incorrect number of begin/end element
processing events was raised by the XElementSerializer");
A simple yet rewarding practice that makes it obvious in your runner application what has failed. If you don't provide a message, you'll usually get something like "Expected true, was false" in your failure output, which makes you have to actually go read the test to find out what's wrong.
4. Comment the reason for the test – what’s the business assumption?
/// A layer cannot be constructed with a null gisLayer, as every function
/// in the Layer class assumes that a valid gisLayer is present.
[Test]
public void ShouldNotAllowConstructionWithANullGisLayer()
{
}
This may seem obvious, but this practice will protect the integrity of your tests from people who don't understand the reason behind the test in the first place. I've seen many tests get removed or modified that were perfectly fine, simply because the person didn't understand the assumptions that the test was verifying.
If the test is trivial or the method name is sufficiently descriptive, it can be permissible to leave the comment off.
5. Every test must always revert the state of any resource it touches
Use mocks where possible to avoid dealing with real resources. Cleanup must be done at the test level. Tests must not have any reliance on order of execution.
Keep these goals in mind (adapted from the book xUnit Test Patterns by Meszaros)
Tests should reduce risk, not introduce it.
Tests should be easy to run.
Tests should be easy to maintain as the system evolves around them.
Some things to make this easier:
Tests should only fail for one reason.
Tests should only test one thing.
Minimize test dependencies (no dependencies on databases, files, UI, etc.).
Don't forget that you can do integration testing with your xUnit framework too, but keep integration tests and unit tests separate.
Tests should be isolated. One test should not depend on another. Even further, a test should not rely on external systems. In other words, test your code, not the code your code depends on. You can test those interactions as part of your integration or functional tests.
Some properties of great unit tests:
When a test fails, it should be immediately obvious where the problem lies. If you have to use the debugger to track down the problem, then your tests aren't granular enough. Having exactly one assertion per test helps here.
When you refactor, no tests should fail.
Tests should run so fast that you never hesitate to run them.
All tests should pass always; no non-deterministic results.
Unit tests should be well-factored, just like your production code.
@Alotor: If you're suggesting that a library should only have unit tests at its external API, I disagree. I want unit tests for each class, including classes that I don't expose to external callers. (However, if I feel the need to write tests for private methods, then I need to refactor.)
EDIT: There was a comment about duplication caused by "one assertion per test". Specifically, if you have some code to set up a scenario, and then want to make multiple assertions about it but only have one assertion per test, you might duplicate the setup across multiple tests.
I don't take that approach. Instead, I use test fixtures per scenario. Here's a rough example:
[TestFixture]
public class StackTests
{
    [TestFixture]
    public class EmptyTests
    {
        Stack<int> _stack;

        [TestSetup]
        public void TestSetup()
        {
            _stack = new Stack<int>();
        }

        [TestMethod]
        [ExpectedException (typeof(Exception))]
        public void PopFails()
        {
            _stack.Pop();
        }

        [TestMethod]
        public void IsEmpty()
        {
            Assert(_stack.IsEmpty());
        }
    }

    [TestFixture]
    public class PushedOneTests
    {
        Stack<int> _stack;

        [TestSetup]
        public void TestSetup()
        {
            _stack = new Stack<int>();
            _stack.Push(7);
        }

        // Tests for one item on the stack...
    }
}
What you're after is delineation of the behaviours of the class under test.
Verification of expected behaviours.
Verification of error cases.
Coverage of all code paths within the class.
Exercising all member functions within the class.
The basic intent is to increase your confidence in the behaviour of the class.
This is especially useful when looking at refactoring your code. Martin Fowler has an interesting article regarding testing over at his web site.
HTH.
cheers,
Rob
Tests should fail to begin with. Then you should write the code that makes them pass; otherwise you run the risk of writing a test that is itself buggy and always passes.
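A tiny sketch of that red/green step (assuming NUnit; the Slug class is made up): the test is written before the code exists, watched failing, and only then is the simplest passing code added.
using NUnit.Framework;

// Step 2: the simplest code that makes the test below pass.
public static class Slug
{
    public static string Ify(string input)
    {
        return input.Replace(' ', '-');
    }
}

[TestFixture]
public class SlugTests
{
    // Step 1: written first and seen to fail (it can't even compile
    // until Slug exists), which proves the test itself is able to fail.
    [Test]
    public void Ify_ReplacesSpacesWithHyphens()
    {
        Assert.AreEqual("hello-world", Slug.Ify("hello world"));
    }
}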
I like the Right BICEP acronym from the aforementioned Pragmatic Unit Testing book:
Right: Are the results right?
B: Are all the boundary conditions correct?
I: Can we check inverse relationships?
C: Can we cross-check results using other means?
E: Can we force error conditions to happen?
P: Are performance characteristics within bounds?
Personally I feel that you can get pretty far by checking that you get the right results (1+1 should return 2 in an addition function), trying out all the boundary conditions you can think of (such as using two numbers whose sum is greater than the integer max value in the add function) and forcing error conditions such as network failures.
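For example, the boundary case mentioned above - a sum exceeding the integer max value - might be pinned down like this (a sketch; the Add function is hypothetical):
using System;
using NUnit.Framework;

[TestFixture]
public class AddTests
{
    // Hypothetical system under test: addition that detects overflow.
    private static int Add(int a, int b)
    {
        return checked(a + b);
    }

    [Test]
    public void Add_OneAndOne_ReturnsTwo()
    {
        Assert.AreEqual(2, Add(1, 1)); // Right: are the results right?
    }

    [Test]
    public void Add_SumExceedsIntMaxValue_Throws()
    {
        // Boundary condition plus a forced error condition.
        Assert.Throws<OverflowException>(() => Add(int.MaxValue, 1));
    }
}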
Good tests need to be maintainable.
I haven't quite figured out how to do this for complex environments.
All the textbooks start to come unglued as your code base reaches into the hundreds of thousands or millions of lines of code.
Team interactions explode.
the number of test cases explodes
interactions between components explode
the time to build all the unit tests becomes a significant part of the build time
an API change can ripple out to hundreds of test cases, even though the production code change was easy
the number of events required to sequence processes into the right state increases, which in turn increases test execution time
Good architecture can control some of the interaction explosion, but inevitably, as systems become more complex, the automated testing system grows with it.
This is where you start having to deal with trade-offs:
only test external API otherwise refactoring internals results in significant test case rework.
setup and teardown of each test gets more complicated as an encapsulated subsystem retains more state.
nightly compilation and automated test execution grows to hours.
increased compilation and execution times mean designers don't or won't run all the tests
to reduce test execution times, you consider sequencing tests to cut down on setup and teardown
You also need to decide:
where do you store test cases in your code base?
how do you document your test cases?
can test fixtures be re-used to save test case maintenance?
what happens when a nightly test case execution fails? Who does the triage?
How do you maintain the mock objects? If you have 20 modules all using their own flavor of a mock logging API, changing the API ripples quickly. Not only do the test cases change, but the 20 mock objects change. Those 20 modules were written over several years by many different teams. It's a classic re-use problem.
individuals and their teams understand the value of automated tests; they just don't like how the other team is doing it. :-)
I could go on forever, but my point is that:
Tests need to be maintainable.
I covered these principles a while back in this MSDN Magazine article, which I think is important for any developer to read.
The way I define "good" unit tests is if they possess the following three properties:
They are readable (naming, asserts, variables, length, complexity...)
They are maintainable (no logic, not over-specified, state-based, refactored...)
They are trustworthy (test the right thing, isolated, not integration tests...)
Unit testing just tests the external API of your unit; you shouldn't test internal behaviour.
Each test of a TestCase should test one (and only one) method inside this API.
Additional test cases should be included for failure cases.
Check the coverage of your tests: once a unit is tested, 100% of the lines inside that unit should have been executed.
Jay Fields has a lot of good advice about writing unit tests, and there is a post where he summarizes the most important of it. There you will read that you should think critically about your context and judge whether the advice is worth it to you. You get a ton of amazing answers here, but it is up to you to decide which is best for your context. Try them, and refactor if it smells bad to you.
Kind Regards
Never assume that a trivial 2 line method will work. Writing a quick unit test is the only way to prevent the missing null test, misplaced minus sign and/or subtle scoping error from biting you, inevitably when you have even less time to deal with it than now.
I second the "A TRIP" answer, except that tests SHOULD rely on each other!!!
Why?
DRY - Don't Repeat Yourself - applies to testing as well! Test dependencies can help to 1) save setup time, 2) save fixture resources, and 3) pinpoint failures. Of course, only given that your testing framework supports first-class dependencies. Otherwise, I admit, they are bad.
Follow-up: http://www.iam.unibe.ch/~scg/Research/JExample/
Often unit tests are based on mock objects or mock data.
I like to write three kinds of unit tests:
"transient" unit tests: they create their own mock objects/data and test their function with them, but destroy everything and leave no trace (like no data in a test database)
"persistent" unit tests: they test functions within your code that create objects/data that will be needed by more advanced functions later on for their own unit tests (sparing those advanced functions from recreating their own set of mock objects/data every time)
"persistent-based" unit tests: unit tests using mock objects/data that are already there, created in another unit test session by the persistent unit tests
The point is to avoid replaying everything in order to be able to test every function.
I run the third kind very often because all mock objects/data are already there.
I run the second kind whenever my model changes.
I run the first kind once in a while, to check the very basic functions and catch basic regressions.
Think about the 2 types of testing and treat them differently - functional testing and performance testing.
Use different inputs and metrics for each. You may need to use different software for each type of test.
I use a consistent test naming convention described by Roy Osherove's Unit Test Naming Standards. Each method in a given test case class has the naming style MethodUnderTest_Scenario_ExpectedResult.
The first test name section is the name of the method in the system under test.
Next is the specific scenario that is being tested.
Finally comes the expected result of that scenario.
Each section uses Upper Camel Case and is delimited by an underscore.
I have found this useful: when I run the tests, they are grouped by the name of the method under test, and having a convention allows other developers to understand the test intent.
I also append parameters to the method name if the method under test has been overloaded.
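For instance, tests for a hypothetical Divide method would come out like this under that convention (names are illustrative; bodies elided):
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Divide_TwoPositiveIntegers_ReturnsQuotient() { /* ... */ }

    [Test]
    public void Divide_DenominatorIsZero_ThrowsDivideByZeroException() { /* ... */ }

    // Overloaded method under test: parameters appended to the method name.
    [Test]
    public void DivideDouble_DenominatorIsZero_ReturnsInfinity() { /* ... */ }
}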
Closed. This question needs to be more focused. It is not currently accepting answers.
I saw many questions asking 'how' to unit test in a specific language, but no question asking 'what', 'why', and 'when'.
What is it?
What does it do for me?
Why should I use it?
When should I use it (also when not)?
What are some common pitfalls and misconceptions?
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automate-able and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Note that if your test code writes to a file, opens a database connection or does something over the network, it's more appropriately categorized as an integration test. Integration tests are a good thing, but should not be confused with unit tests. Unit test code should be short, sweet and quick to execute.
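A minimal, self-contained example of that kind of test might look like this (assuming NUnit; the PriceCalculator class is made up for illustration):
using NUnit.Framework;

// Hypothetical class under test.
public class PriceCalculator
{
    private readonly decimal _discountRate;

    public PriceCalculator(decimal discountRate)
    {
        _discountRate = discountRate;
    }

    public decimal Apply(decimal price)
    {
        return price * (1 - _discountRate);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void AppliesTenPercentDiscount()
    {
        // All in memory: no file, database, or network - a true unit test.
        var calculator = new PriceCalculator(0.10m);
        Assert.AreEqual(90m, calculator.Apply(100m));
    }
}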
Another way to look at unit testing is that you write the tests first. This is known as Test-Driven Development (TDD for short). TDD brings additional advantages:
You don't write speculative "I might need this in the future" code -- just enough to make the tests pass
The code you've written is always covered by tests
By writing the test first, you're forced into thinking about how you want to call the code, which usually improves the design of the code in the long run.
If you're not doing unit testing now, I recommend you get started on it. Get a good book; practically any xUnit book will do, because the concepts are very much transferable between them.
Sometimes writing unit tests can be painful. When it gets that way, try to find someone to help you, and resist the temptation to "just write the damn code". Unit testing is a lot like washing the dishes. It's not always pleasant, but it keeps your metaphorical kitchen clean, and you really want it to be clean. :)
Edit: One misconception comes to mind, although I'm not sure if it's so common. I've heard a project manager say that unit tests made the team write all the code twice. If it looks and feels that way, well, you're doing it wrong. Not only does writing the tests usually speed up development, but it also gives you a convenient "now I'm done" indicator that you wouldn't have otherwise.
I don't disagree with Dan (although a better choice may just be not to answer)...but...
Unit testing is the process of writing code to test the behavior and functionality of your system.
Obviously tests improve the quality of your code, but that's just a superficial benefit of unit testing. The real benefits are to:
Make it easier to change the technical implementation while making sure you don't change the behavior (refactoring). Properly unit tested code can be aggressively refactored/cleaned up with little chance of breaking anything without noticing it.
Give developers confidence when adding behavior or making fixes.
Document your code.
Indicate areas of your code that are tightly coupled. It's hard to unit test code that's tightly coupled.
Provide a means to use your API and look for difficulties early on.
Indicate methods and classes that aren't very cohesive.
You should unit test because it's in your interest to deliver a maintainable, quality product to your client.
I'd suggest you use it for any system, or part of a system, which models real-world behavior. In other words, it's particularly well suited for enterprise development. I would not use it for throw-away/utility programs. I would not use it for parts of a system that are problematic to test (UI is a common example, but that isn't always the case)
The greatest pitfall is that developers test too large a unit, or they consider a method a unit. This is particularly true if you don't understand Inversion of Control - in which case your unit tests will always turn into end-to-end integration testing. Unit test should test individual behaviors - and most methods have many behaviors.
The greatest misconception is that programmers shouldn't test. Only bad or lazy programmers believe that. Should the guy building your roof not test it? Should the doctor replacing a heart valve not test the new valve? Only a programmer can test that his code does what he intended it to do (QA can test edge cases - how code behaves when it's told to do things the programmer didn't intend - and the client can do acceptance testing - does the code do what the client paid for?).
The main difference between unit testing and "just opening a new project and testing this specific code" is that it's automated, and thus repeatable.
If you test your code manually, it may convince you that the code is working perfectly - in its current state. But what about a week later, when you made a slight modification in it? Are you willing to retest it again by hand whenever anything changes in your code? Most probably not :-(
But if you can run your tests anytime, with a single click, exactly the same way, within a few seconds, then they will show you immediately whenever something is broken. And if you also integrate the unit tests into your automated build process, they will alert you to bugs even in cases where a seemingly completely unrelated change broke something in a distant part of the codebase - when it would not even occur to you that there is a need to retest that particular functionality.
This is the main advantage of unit tests over hand testing. But wait, there is more:
unit tests shorten the development feedback loop dramatically: with a separate testing department it may take weeks for you to know that there is a bug in your code, by which time you have already forgotten much of the context, thus it may take you hours to find and fix the bug; OTOH with unit tests, the feedback cycle is measured in seconds, and the bug fix process is typically along the lines of an "oh sh*t, I forgot to check for that condition here" :-)
unit tests effectively document (your understanding of) the behaviour of your code
unit testing forces you to reevaluate your design choices, which results in simpler, cleaner design
Unit testing frameworks, in turn, make it easy for you to write and run your tests.
I was never taught unit testing at university, and it took me a while to "get" it. I read about it, went "ah, right, automated testing, that could be cool I guess", and then I forgot about it.
It took quite a bit longer before I really figured out the point: Let's say you're working on a large system and you write a small module. It compiles, you put it through its paces, it works great, you move on to the next task. Nine months down the line and two versions later someone else makes a change to some seemingly unrelated part of the program, and it breaks the module. Worse, they test their changes, and their code works, but they don't test your module; hell, they may not even know your module exists.
And now you've got a problem: broken code is in the trunk and nobody even knows. The best case is an internal tester finds it before you ship, but fixing code that late in the game is expensive. And if no internal tester finds it...well, that can get very expensive indeed.
The solution is unit tests. They'll catch problems when you write code - which is fine - but you could have done that by hand. The real payoff is that they'll catch problems nine months down the line when you're now working on a completely different project, but a summer intern thinks it'll look tidier if those parameters were in alphabetical order - and then the unit test you wrote way back fails, and someone throws things at the intern until he changes the parameter order back. That's the "why" of unit tests. :-)
Chipping in on the philosophical pros of unit testing and TDD, here are a few of the key "lightbulb" observations which struck me on my tentative first steps on the road to TDD enlightenment (none original or necessarily news)...
TDD does NOT mean writing twice the amount of code. Test code is typically fairly quick and painless to write, and is, critically, a key part of your design process.
TDD helps you to realize when to stop coding! Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing.
The tests and the code work together to achieve better code. Your code could be bad/buggy. Your TEST could be bad/buggy. In TDD you are banking on the chances of BOTH being bad/buggy being fairly low. Often it's the test that needs fixing, but that's still a good outcome.
TDD helps with coding constipation. You know that feeling that you have so much to do you barely know where to start? It's Friday afternoon, if you just procrastinate for a couple more hours... TDD allows you to flesh out very quickly what you think you need to do, and gets your coding moving quickly. Also, like lab rats, I think we all respond to that big green light and work harder to see it again!
In a similar vein, these designer types can SEE what they're working on. They can wander off for a juice / cigarette / iphone break and return to a monitor that immediately gives them a visual cue as to where they got to. TDD gives us something similar. It's easier to see where we got to when life intervenes...
I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful, even if the rest of my code coverage is woefully incomplete.
TDD helps in all kinds of surprising ways down the line. Good unit tests can help document what something is supposed to do, they can help you migrate code from one project to another and give you an unwarranted feeling of superiority over your non-testing colleagues :)
This presentation is an excellent introduction to all the yummy goodness testing entails.
I would like to recommend the xUnit Test Patterns book by Gerard Meszaros. It's large but is a great resource on unit testing. Here is a link to his web site where he discusses the basics of unit testing. http://xunitpatterns.com/XUnitBasics.html
I use unit tests to save time.
When building business logic (or data access), testing functionality can often involve typing stuff into a lot of screens that may or may not be finished yet. Automating these tests saves time.
For me unit tests are a kind of modularised test harness. There is usually at least one test per public function. I write additional tests to cover various behaviours.
All the special cases that you thought of when developing the code can be recorded in the code in the unit tests. The unit tests also become a source of examples on how to use the code.
It is a lot faster for me to discover that my new code breaks something in my unit tests than to check in the code and have some front-end developer find a problem.
For data access testing I try to write tests that either have no change or clean up after themselves.
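One sketch of the "no change" approach in .NET is to open a TransactionScope per test and never commit - disposing it without calling Complete() rolls everything back (the repository call shown is hypothetical):
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class CustomerDataTests
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollBack()
    {
        // No Complete() was called, so disposal rolls the work back
        // and the database is left exactly as it was found.
        _scope.Dispose();
    }

    [Test]
    public void InsertedCustomer_IsVisibleWithinTheTransaction()
    {
        // e.g. new CustomerRepository(connection).Insert("Ada");
        //      followed by an assertion that the row can be read back.
    }
}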
Unit tests aren't going to be able to cover all the testing requirements. But they will save development time and test core parts of the application.
This is my take on it. I would say unit testing is the practice of writing software tests to verify that your real software does what it is meant to. This started with JUnit in the Java world and has become a best practice in PHP as well, with SimpleTest and PHPUnit. It's a core practice of Extreme Programming and helps you to be sure that your software still works as intended after editing. If you have sufficient test coverage, you can do major refactoring, fix bugs, or add features rapidly with much less fear of introducing other problems.
It's most effective when all unit tests can be run automatically.
Unit testing is generally associated with OO development. The basic idea is to create a script which sets up the environment for your code and then exercises it; you write assertions, specify the intended output that you should receive and then execute your test script using a framework such as those mentioned above.
The framework will run all the tests against your code and then report back success or failure of each test. phpUnit is run from the Linux command line by default, though there are HTTP interfaces available for it. SimpleTest is web-based by nature and is much easier to get up and running, IMO. In combination with xDebug, phpUnit can give you automated statistics for code coverage which some people find very useful.
Some teams write hooks from their subversion repository so that unit tests are run automatically whenever you commit changes.
It's good practice to keep your unit tests in the same repository as your application.
Libraries like NUnit, xUnit or JUnit are simply mandatory if you want to develop your projects using the TDD approach popularized by Kent Beck:
You can read Introduction to Test Driven Development (TDD) or Kent Beck's book Test Driven Development: By Example.
Then, if you want to be sure your tests cover a "good" part of your code, you can use software like NCover, JCover, PartCover or whatever. They'll tell you the coverage percentage of your code. Depending on how adept you are at TDD, you'll know whether you've practiced it well enough :)
Unit testing is the testing of a unit of code (e.g. a single function) without the need for the infrastructure that that unit of code relies on - i.e. testing it in isolation.
If, for example, the function that you're testing connects to a database and does an update, in a unit test you might not want to do that update. You would if it were an integration test but in this case it's not.
So a unit test would exercise the functionality enclosed in the "function" you're testing without side effects of the database update.
Say your function retrieved some numbers from a database and then performed a standard deviation calculation. What are you trying to test here? That the standard deviation is calculated correctly or that the data is returned from the database?
In a unit test you just want to test that the standard deviation is calculated correctly. In an integration test you want to test the standard deviation calculation and the database retrieval.
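Concretely, that means splitting the calculation away from the retrieval so the former can be unit tested on plain numbers (a sketch; the class and names are illustrative):
using System;
using System.Linq;
using NUnit.Framework;

public static class Statistics
{
    // Pure calculation - no database involved, so it's trivially unit-testable.
    public static double StdDev(double[] values)
    {
        double mean = values.Average();
        return Math.Sqrt(values.Sum(v => (v - mean) * (v - mean)) / values.Length);
    }
}

[TestFixture]
public class StatisticsTests
{
    [Test]
    public void StdDev_OfIdenticalValues_IsZero()
    {
        Assert.AreEqual(0.0, Statistics.StdDev(new[] { 5.0, 5.0, 5.0 }));
    }

    [Test]
    public void StdDev_OfKnownSample_MatchesHandComputedValue()
    {
        // Population standard deviation of {2, 4, 4, 4, 5, 5, 7, 9} is exactly 2.
        Assert.AreEqual(2.0, Statistics.StdDev(new[] { 2.0, 4, 4, 4, 5, 5, 7, 9 }), 1e-9);
    }
}
The database-plus-calculation path would then be covered by a separate integration test.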
Unit testing is about writing code that tests your application code.
The Unit part of the name is about the intention to test small units of code (one method for example) at a time.
The xUnit frameworks are there to help with this testing. Part of that is automated test runners that tell you which tests fail and which ones pass.
They also have facilities to set up common code that you need in each test beforehand and tear it down when all tests have finished.
You can have a test to check that an expected exception has been thrown, without having to write the whole try catch block yourself.
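For example, with NUnit either of these saves you the manual try/catch (note the specific exception type rather than a blanket Exception):
using System;
using NUnit.Framework;

[TestFixture]
public class ExceptionTests
{
    // Attribute style (classic NUnit 2.x):
    [Test]
    [ExpectedException(typeof(DivideByZeroException))]
    public void Dividing_ByZero_Throws_AttributeStyle()
    {
        int zero = 0;
        int unused = 1 / zero;
    }

    // Assert.Throws style (NUnit 2.5+), which also hands you the exception:
    [Test]
    public void Dividing_ByZero_Throws_AssertStyle()
    {
        int zero = 0;
        var ex = Assert.Throws<DivideByZeroException>(() => { int r = 1 / zero; });
        Assert.IsNotNull(ex);
    }
}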
I think the point that you don't understand is that unit testing frameworks like NUnit (and the like) will help you in automating small to medium-sized tests. Usually you can run the tests in a GUI (that's the case with NUnit, for instance) by simply clicking a button and then - hopefully - see the progress bar stay green. If it turns red, the framework shows you which test failed and what exactly went wrong. In a normal unit test, you often use assertions, e.g. Assert.AreEqual(expectedValue, actualValue, "some description") - so if the two values are unequal you will see an error saying "some description: expected <expectedValue> but was <actualValue>".
So, in conclusion, unit testing will make testing faster and a lot more comfortable for developers. You can run all the unit tests before committing new code, so that you don't break the build process of other developers on the same project.
Use Testivus. All you need to know is right there :)
Unit testing is a practice to make sure that the function or module which you are going to implement behaves as expected (requirements), and also to check how it behaves in scenarios like boundary conditions and invalid input.
xUnit, NUnit, mbUnit, etc. are tools which help you in writing the tests.
Test Driven Development has sort of taken over the term Unit Test. As an old timer I will mention the more generic definition of it.
Unit Test also means testing a single component in a larger system. This single component could be a dll, exe, class library, etc. It could even be a single system in a multi-system application. So ultimately Unit Test ends up being the testing of whatever you want to call a single piece of a larger system.
You would then move up to integrated or system testing by testing how all the components work together.
First of all, whether speaking about unit testing or any other kind of automated testing (integration, load, UI testing, etc.), the key difference from what you suggest is that it is automated and repeatable, and it doesn't require any human resources to be consumed (= nobody has to perform the tests; they usually run at the press of a button).
I went to a presentation on unit testing at FoxForward 2007 and was told never to unit test anything that works with data. After all, if you test on live data, the results are unpredictable, and if you don't test on live data, you're not actually testing the code you wrote. Unfortunately, that's most of the coding I do these days. :-)
I did take a shot at TDD recently when I was writing a routine to save and restore settings. First, I verified that I could create the storage object. Then, that it had the method I needed to call. Then, that I could call it. Then, that I could pass it parameters. Then, that I could pass it specific parameters. And so on, until I was finally verifying that it would save the specified setting, allow me to change it, and then restore it, for several different syntaxes.
I didn't get to the end, because I needed-the-routine-now-dammit, but it was a good exercise.
What do you do if you are given a pile of crap and it seems like you are stuck in a perpetual state of cleanup, where you know that the addition of any new feature or code can break the current set because the current software is like a house of cards?
How can we do unit testing then?
You start small. The project I just got into had no unit testing until a few months ago. When coverage was that low, we would simply pick a file that had no coverage and click "add tests".
Right now we're up to over 40%, and we've managed to pick off most of the low-hanging fruit.
(The best part is that even at this low level of coverage, we've already run into many instances of the code doing the wrong thing, and the testing caught it. That's a huge motivator to push people to add more testing.)
This answers why you should be doing unit testing.
The three videos below cover unit testing in JavaScript, but the general principles apply across most languages.
Unit Testing: Minutes Now Will Save Hours Later - Eric Mann - https://www.youtube.com/watch?v=_UmmaPe8Bzc
JS Unit Testing (very good) - https://www.youtube.com/watch?v=-IYqgx8JxlU
Writing Testable JavaScript - https://www.youtube.com/watch?v=OzjogCFO4Zo
Now, I'm just learning about the subject, so I may not be 100% correct and there's more to it than what I'm describing here, but my basic understanding of unit testing is this: you write some test code (which is kept separate from your main code) that calls a function in your main code with the input (arguments) that the function requires, and the test then checks whether it gets back a valid return value. If it does, the unit testing framework that you're using to run the tests shows a green light (all good); if the value is invalid, you get a red light, and you can then fix the problem straight away, before you release the new code to production. Without testing, you may never have caught the error.
So you write tests for your current code and create the code so that it passes the tests. Months later, you or someone else needs to modify the function in your main code. Because you had already written test code for that function, you run it again, and the test may fail because the coder introduced a logic error in the function or returned something completely different from what that function is supposed to return. Again, without the test in place, that error might be hard to track down, as it can possibly affect other code as well and will go unnoticed.
Also, the fact that you have a computer program that runs through your code and tests it, instead of you manually doing it in the browser page by page, saves time (unit testing for JavaScript). Let's say that you modify a function that is used by some script on a web page, and it works all well and good for its new intended purpose. But let's also say, for argument's sake, that there is another function somewhere else in your code that depends on that newly modified function to operate properly. This dependent function may now stop working because of the changes you've made to the first function. Without tests in place that are run automatically by your computer, you won't notice that there's a problem with that function until it is actually executed: you'd have to manually navigate to a web page that includes the script which executes the dependent function, and only then would you notice that there's a bug because of the change you made to the first function.
To reiterate, having tests that are run while developing your application will catch these kinds of problems as you're coding. Without the tests in place, you'd have to manually go through your whole application, and even then it can be hard to spot the bug; naively, you send it out into production, and after a while a kind user sends you a bug report (which won't be as clear as the error messages in a testing framework).
It's quite confusing when you first hear of the subject: you think to yourself, am I not already testing my code? And the code that you've written is working like it's supposed to already, so "why do I need another framework?"... Yes, you are already testing your code, but a computer is better at doing it. You just have to write good enough tests for a function/unit of code once, and the rest is taken care of for you by the mighty CPU, instead of you having to manually check that all of your code still works whenever you make a change.
Also, you don't have to unit test your code if you don't want to, but it pays off as your project/code base starts to grow larger, as the chances of introducing bugs increase.
Unit testing, and TDD in general, enables you to have shorter feedback cycles about the software you are writing. Instead of having a large test phase at the very end of the implementation, you incrementally test everything you write. This increases code quality very much, as you immediately see where you might have bugs.