How to overcome unit test regression problems? - unit-testing

I'm looking for a solution for software development teams that spend too much time handling unit test regression problems (about 30% of the time in my case!), i.e., dealing with unit tests that fail on a day-to-day basis.
The following is one solution I'm familiar with; it analyzes which of the latest code changes caused a given unit test to fail:
Unit Test Regression Analysis Tool
I wanted to know if anyone knows of similar tools, so I can benchmark them.
Also, can anyone recommend another approach to handling this annoying problem?
Thanks in advance.

You have our sympathy. It sounds like you have brittle test syndrome. Ideally, a single change to the code should only break a single test - and the failure should indicate a real problem. Like I said, "ideally". But this type of behavior is common, and treatable.
I would recommend spending some time with the team doing some root cause analysis of why all these tests are breaking. Yep, there are some fancy tools that keep track of which tests fail most often, and which ones fail together. Some continuous integration servers have this built in. That's great. But I suspect if you just ask each other, you'll know. I've been through this, and the team always just knows from their experience.
Anywho, a few other things I've seen that cause this:
Unit tests generally shouldn't depend on more than the class and method they are testing. Look for dependencies that have crept in, and make sure you're using dependency injection to make testing easier (there's a sketch of this after the list).
Are these truly unique tests? Or are they testing the same thing over and over? If they are always going to fail together, why not just remove all but one?
Many people favor integration tests over unit tests, since they get more coverage for their buck. But with these, a single change can break lots of tests. Maybe you're writing integration tests?
Perhaps lots of tests run through some common set-up code, causing them to break in unison. Maybe that shared code can be mocked out to isolate behaviors.
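To make the dependency-injection point concrete, here is a minimal sketch in C# with NUnit (the OrderService/IClock names and the weekend-discount rule are invented for illustration): the class receives its collaborator through the constructor, so a tiny hand-rolled stub replaces the real dependency and the test runs in isolation.

    using System;
    using NUnit.Framework;

    public interface IClock
    {
        DateTime Now { get; }
    }

    public class OrderService
    {
        private readonly IClock _clock;
        public OrderService(IClock clock) { _clock = clock; }

        // Hypothetical rule: weekend orders get a 10% discount.
        public decimal PriceFor(decimal basePrice)
        {
            var day = _clock.Now.DayOfWeek;
            bool weekend = day == DayOfWeek.Saturday || day == DayOfWeek.Sunday;
            return weekend ? basePrice * 0.9m : basePrice;
        }
    }

    // Hand-rolled stub: no shared set-up code, no real clock.
    class FixedClock : IClock
    {
        public FixedClock(DateTime now) { Now = now; }
        public DateTime Now { get; }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void Weekend_orders_get_a_ten_percent_discount()
        {
            // 2010-10-02 was a Saturday.
            var service = new OrderService(new FixedClock(new DateTime(2010, 10, 2)));
            Assert.AreEqual(90m, service.PriceFor(100m));
        }
    }

Because nothing here touches the system clock or any shared fixture, a change elsewhere in the codebase can't knock this test over in unison with fifty others.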

Test often, commit often.
If you don't do that already, I suggest using a Continuous Integration tool and asking/requiring the developers to run the automated tests before committing - at least a subset of the tests. If running all tests takes too long, use a CI tool that spawns a build (which includes running all automated tests) for each commit, so you can easily see which commit broke the build.
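One pragmatic way to define that "subset of the tests" is NUnit's [Category] attribute; here is a hedged sketch (the Invoice class and category names are invented), assuming NUnit 3:

    using NUnit.Framework;

    public static class Invoice
    {
        public static decimal Total(decimal[] lines)
        {
            decimal sum = 0;
            foreach (var line in lines) sum += line;
            return sum;
        }
    }

    [TestFixture]
    public class InvoiceTests
    {
        // Cheap and isolated: run these before every commit.
        [Test, Category("Fast")]
        public void Total_sums_line_items()
        {
            Assert.AreEqual(30m, Invoice.Total(new[] { 10m, 20m }));
        }

        // Hits real infrastructure: leave these to the CI server.
        [Test, Category("Slow")]
        public void Invoice_roundtrips_through_the_database()
        {
            // ... integration logic would go here ...
        }
    }

Developers can then run just the fast subset before committing - with the NUnit 3 console runner that is something like nunit3-console Tests.dll --where "cat == Fast" - while the CI build runs everything on every commit.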
If the automated tests are too fragile, maybe they don't test the functionality, but the implementation details? Sometimes testing the implementation details is a good idea, but it can be problematic.

Regarding running the subset of tests most likely to fail - since a test usually fails due to other team members' changes (at least in my case), I'd need to ask others to run my tests - which might be 'politically problematic' in some development environments ;). Any other suggestions will be appreciated. Thanks a lot – SpeeDev Sep 30 '10 at 23:18
If you have to "ask others" to run your test, that suggests a serious problem with your test infrastructure. All tests (regardless of who wrote them) should be run automatically. The responsibility for fixing a failing test should lie with the person who committed the change, not the test author.

Related

Unit-tests for a working codebase with limited time: How?

I have a few medium-sized Rails apps that I work on routinely, and only one of them has any unit tests at all. But I have seen the light and I want to change all that, except... I don't have the time to go in and start writing tests class by class or anything like that.
How do you start writing unit tests for an existing - and working - codebase with limited time? Since any approach would have to be incremental, how would you order your unit-test writing? Start with superficial tests and then move on to more coverage, or cover just a few classes, etc.?
Note: I am asking this question thinking about Rails, but really I'm interested in how it applies to any language.
Edit: Note, this question is not the same as this other one. The other one asks how hard this is, and was the result worth it. I'm asking about how to add unit tests.
Here is how I usually start adding unit tests to a project that didn't start out that way: wait for someone to file a bug, then write a unit test that reproduces it. Then fix the code so the test passes. This not only starts building up the unit tests, but it also means no one can accuse you of a regression for that bug.
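For instance, here is a hedged C#/NUnit sketch of that workflow (the bug, the Pricing class and the numbers are all invented): first a test that reproduces the report, then just enough of a fix to turn it green.

    using NUnit.Framework;

    public static class Pricing
    {
        public static decimal Apply(int discountPercent, decimal basePrice)
        {
            // The fixed version; the hypothetical reported bug was a formula
            // that went negative at 100%, which the test below pins down.
            return basePrice * (100 - discountPercent) / 100m;
        }
    }

    [TestFixture]
    public class PricingRegressionTests
    {
        // Reproduces the filed bug: a 100% discount must yield zero, not a negative price.
        [Test]
        public void Full_discount_results_in_a_zero_price()
        {
            Assert.AreEqual(0m, Pricing.Apply(discountPercent: 100, basePrice: 49.99m));
        }
    }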
My answer isn't specific to Ruby on Rails. Next time you need to touch the codebase, to fix a bug or add a new feature, write tests for the parts of the code you're touching. If you can spare a couple of minutes, add some related tests. If you find that you need to refactor, go ahead and write the tests to support that. Over time you'll build up the test coverage, and you'll find you always have tests for the areas you need them in (because those are the tests you're writing).
I had a very similar experience a few years ago, and stumbled upon this book:
Working Effectively With Legacy Code by Michael C. Feathers
It has an incredibly complete set of techniques for starting with an existing codebase that has no unit test coverage, and gradually getting it under test. If I could recommend only one book on TDD, it would be this one.
Hopefully this helps... best of luck!
Tyler
One of the problems I faced when I started writing real unit tests (with mocks etc.) was that I had to go back and change the code to support the injection of the mock objects, mostly through the extraction of interfaces. I believe it will be pretty hard for you to do that on an existing system without some sort of refactoring.
Increasing code coverage is an excellent way to get a new recruit familiar with a codebase.
Other than that, I think you just need to find the time - there is no magic solution!
In the long run, unit testing should make getting (working!) functionality to the users faster.
If it's not accomplishing that, it's not worth the time.
With limited time? Facing deadlines?
Forget about unit tests!
Cowboy coding 4 the win!
Hack features together while it's not too late and the client hasn't sued your company.
P.S. For your own safety - do not forget to inform your PM about the situation.
Strange that the downvote avalanche hasn't started yet. Maybe it's not so bad, and saying NOT to write unit tests isn't such a taboo after all.
I'm in a similar situation (assuming your time is really limited). What I do is not think about unit tests most of the time. But in some cases it's actually easier to do TDD than to keep hacking (emm... duct-taping? :D) everything together - usually when a testable unit has high complexity or is hard to test manually - and then I just switch my mindset and work that way. In the short term I'll still be able to understand what I wrote a month ago, and that won't cause much trouble. Problems will arise when the project slips into the maintenance phase, but it's still way better than telling the client that you worked on tests and delivered nothing new.
When you need to start unit testing in an existing project, start with your own functionality. Create the necessary test infrastructure (and, if time allows, continuous integration too), and don't forget to teach unit testing to your co-workers.
The worst thing you (or a PM) can do is force unit test writing on someone who does not know how to do it. That's just wasting time. Lead by example. Gradually.
It did start after all! ^_^

Is there such a thing as too much unit testing?

I tried looking through all the pages about unit tests and could not find this question. If this is a duplicate, please let me know and I will delete it.
I was recently tasked to help implement unit testing at my company. I realized that I could unit test all the Oracle PL/SQL code, Java code, HTML, JavaScript, XML, XSLT, and more.
Is there such a thing as too much unit testing? Should I write unit tests for everything above or is that overkill?
This depends on the project and its tolerance for failure. There is no single answer. If you can risk a bug, then don't test everything.
When you have tons of tests, it is also likely that you will have bugs in your tests, adding to your headaches.
Test what needs testing and leave out what does not - which often means leaving out the fairly simple stuff.
Is there such a thing as too much unit testing?
Sure. The problem is finding the right balance between enough unit testing to cover the important areas of functionality, and focusing effort on creating new value for your customers in terms of system functionality.
Unit testing code vs. leaving code uncovered by tests both have a cost.
The costs of excluding code from unit testing include (but aren't limited to):
Increased development time due to fixing issues you can't automatically test
Fixing problems discovered during QA testing
Fixing problems discovered when the code reaches your customers
Loss of revenue due to customer dissatisfaction with defects that made it through testing
The costs of writing a unit test include (but aren't limited to):
Writing the original unit test
Maintaining the unit test as your system evolves
Refining the unit test to cover more conditions as you discover them in testing or production
Refactoring unit tests as the underlying code under test is refactored
Lost revenue when it takes longer for your application to reach the market
The opportunity cost of implementing features that could drive sales
You have to make your best judgement about what these costs are likely to be, and what your tolerance is for absorbing such costs.
In general, unit testing costs are mostly absorbed during the development phase of a system - and somewhat during its maintenance. If you spend too much time writing unit tests, you may miss a valuable window of opportunity to get your product to market. This could cost you sales, or even long-term revenue if you operate in a competitive industry.
The cost of defects is absorbed during the entire lifetime of your system in production - up until the point the defect is corrected. And potentially even beyond that, if the defect is significant enough to affect your company's reputation or market position.
Kent Beck of JUnit and JUnitMax fame answered a similar question of mine.
The question has slightly different semantics, but the answer is definitely relevant.
The purpose of unit tests is generally to make it possible to refactor or change code with greater assurance that you did not break anything. If a change is scary because you do not know whether you will break anything, you probably need to add a test. If a change is tedious because it will break a lot of tests, you probably have too many tests (or tests that are too fragile).
The most obvious case is the UI. What makes a UI look good is hard to test, and using a master example tends to be fragile. So the layer of the UI involving the look of something tends not to be tested.
The other times it might not be worth it is if the test is very hard to write and the safety it gives is minimal.
For HTML I tended to check that the data I wanted was there (using XPath queries), but I did not test the entire HTML. Similarly for XSLT and XML. In JavaScript, when I could, I tested libraries but left the main page alone (except that I moved most code into libraries). If the JavaScript is particularly complicated, I would test more. For databases I would look into testing stored procedures and possibly views; the rest is more declarative.
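In C#, the same "assert on the data, not the whole document" idea might look like this with System.Xml and NUnit (the markup is invented for the example):

    using System.Xml;
    using NUnit.Framework;

    [TestFixture]
    public class OrderPageTests
    {
        [Test]
        public void Rendered_page_contains_the_order_total()
        {
            // In a real test this string would come from your templating layer.
            string html = "<html><body><span id='total'>42.00</span></body></html>";

            var doc = new XmlDocument();
            doc.LoadXml(html);

            // Assert only on the data we care about, not on the entire markup.
            var node = doc.SelectSingleNode("//span[@id='total']");
            Assert.IsNotNull(node);
            Assert.AreEqual("42.00", node.InnerText);
        }
    }

(This requires the markup to be well-formed XML; for tag-soup HTML you'd need an HTML parser in front.)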
However, in your case first start with the stuff that worries you the most or is about to change, especially if it is not too difficult to test. Check the book Working Effectively with Legacy Code for more help.
Yes, there is such a thing as too much unit testing. One example would be unit testing in a whitebox manner, such that you're effectively testing the specific implementation; such testing slows down progress and refactoring by requiring code whose behavior is unchanged to need new unit tests (because the tests were dependent upon specific implementation details).
I suggest that in some situations you might want automated testing, but no 'unit' testing at all (Should one test internal implementation, or only test public behaviour?), and that any time spent writing unit tests would be better spent writing system tests.
While more tests is usually better (I have yet to be on a project that actually had too many tests), there's a point at which the ROI bottoms out, and you should move on. I'm assuming you have finite time to work on this project, by the way. ;)
Adding unit tests has some amount of diminishing returns -- after a certain point (Code Complete has some theories), you're better off spending your finite amount of time on something else. That may be more testing/quality activities like refactoring and code review, usability testing with real human users, etc., or it could be spent on other things like new features, or user experience polish.
As EJD said, you can't verify the absence of errors.
This means there are always more tests you could write. Any of these could be useful.
What you need to understand is that unit-testing (and other types of automated testing you use for development purposes) can help with development, but should never be viewed as a replacement for formal QA.
Some tests are much more valuable than others.
There are parts of your code that change a lot more frequently, are more prone to break, etc. These are the most economical tests.
You need to balance out the amount of testing you agree to take on as a developer. You can easily overburden yourself with unmaintainable tests. IMO, unmaintainable tests are worse than no tests because they:
Turn others off from trying to maintain a test suite or write new tests.
Detract from adding new, meaningful functionality. If automated testing is not a net-positive result, you should ditch it, like any other engineering practice that isn't paying off.
What should I test?
Test the "Happy Path" - this ensures that you get interactions right, and that things are wired together properly. But you don't adequately test a bridge by driving down it on a sunny day with no traffic.
Pragmatic Unit Testing recommends you use Right-BICEP to figure out what to test: "Right" for the happy path, then Boundary conditions, check any Inverse relationships, use another method (if it exists) to Cross-check results, force Error conditions, and finally take into account any Performance considerations that should be verified. I'd say if you are thinking about tests to write in this way, you'll most likely figure out how to get to an adequate level of testing. You'll be able to figure out which ones are more useful and when. See the book for much more info.
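As a rough illustration of the "Right" plus a couple of the BICEP letters in C#/NUnit (the Stats.Average method is made up for the example):

    using System;
    using NUnit.Framework;

    public static class Stats
    {
        public static double Average(int[] values)
        {
            if (values == null || values.Length == 0)
                throw new ArgumentException("values must be non-empty");
            double sum = 0;
            foreach (var v in values) sum += v;
            return sum / values.Length;
        }
    }

    [TestFixture]
    public class StatsTests
    {
        [Test] // "Right": the happy path.
        public void Average_of_two_and_four_is_three()
        {
            Assert.AreEqual(3.0, Stats.Average(new[] { 2, 4 }));
        }

        [Test] // Boundary: the smallest valid input.
        public void Average_of_a_single_value_is_that_value()
        {
            Assert.AreEqual(7.0, Stats.Average(new[] { 7 }));
        }

        [Test] // Error condition: force the failure path.
        public void Average_of_empty_input_throws()
        {
            Assert.Throws<ArgumentException>(() => Stats.Average(new int[0]));
        }
    }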
Test at the right level
As others have mentioned, unit tests are not the only way to write automated tests. Other types of frameworks may be built off of unit tests, but provide mechanisms to do package level, system or integration tests. The best bang for the buck may be at a higher level, and just using unit testing to verify a single component's happy path.
Don't be discouraged
I'm painting a more grim picture here than I expect most developers will find in reality. The bottom line is that you make a commitment to learn how to write tests and write them well. But don't let fear of the unknown scare you into not writing any tests. Unlike production code, tests can be ditched and rewritten without many adverse effects.
Unit test any code that you think might change.
You should only really write unit tests for any code which you have written yourself. There is no need to test the functionality inherently provided to you.
For example, if you've been given a library with an add function, you should not be testing that add(1,2) returns 3. But if you've WRITTEN that code, then yes, you should be testing it.
Of course, whoever wrote the library may not have tested it and it may not work... in which case you should write it yourself or get a separate one with the same functionality.
Well, you certainly shouldn't unit test everything, but at least the complicated tasks or those that will most likely contain errors/cases you haven't thought of.
The point of unit testing is being able to run a quick set of tests to verify that your code is correct. This lets you verify that your code matches your specification and also lets you make changes and ensure that they don't break anything.
Use your judgement. You don't want to spend all of your time writing unit tests or you won't have any time to write actual code to test.
When you've unit tested your unit tests, thinking you have thereby achieved 200% coverage.
There is a development approach called test-driven development which essentially says that there is no such thing as too much (non-redundant) unit testing. That approach, however, is not a testing approach, but rather a design approach which relies on working code and a more or less complete unit test suite with tests which drive every single decision made about the codebase.
In a non-TDD situation, automated tests should exercise every line of code you write (branch coverage in particular is good), but even then there are exceptions - you shouldn't be testing vendor-supplied platform or framework code unless you know for certain that there are bugs in that platform which will affect you. You shouldn't be testing thin wrappers (equally, if you need to test it, the wrapper is not thin). You should be testing all core business logic, and it is certainly helpful to have some set of tests that exercise your database at some elemental level, although those tests will never work in the common situation where unit tests are run on every compile.
Databases specifically are intrinsically slow to test and, depending on how much logic is held in your database, quite difficult to get right. Typically, things like databases, HTML/XML documents and templating, and other document-ish aspects of a program are verified more so than tested. The difference is usually that testing tries to exercise execution paths, whereas verification tries to verify inputs and outputs directly.
To learn more about this I would suggest reading up on "Code Coverage". There is a lot of material available if you're curious about this.

How to make sure developers are unit testing their code

How can you make sure that all developers on your team are unit testing their code? Code coverage metrics are the only way I can think of to objectively measure this. Is there another way?
(Of course if you're really following TDD then this shouldn't be an issue. But let's just suppose you've got some developers that don't quite "get" TDD yet.)
This is probably a social problem rather than a technological one. First, do you want unit tests that result in 100% code coverage, or can you settle for less and trust your developers to put unit tests where they really matter and where they make sense? You could probably put some kind of system in place that runs code coverage checks to ensure the unit tests cover a certain percentage of the code, but there would still be ways to game the system, and it still wouldn't result in bug-free code. Because of things like the halting problem, it's impossible to cover every path in the code.
Run test coverage reports automatically during your build process. We do this with CruiseControl. Otherwise, you have to actually inspect what is getting tested in your test results reports.
Code coverage tools are almost certainly superior to any ad hoc method you could come up with. That's why they exist.
Even developers who get TDD are far from immune to gaps in coverage. Often, code that fixes a bug breaks a lateral test or creates a branch that the original developer did not anticipate and the maintenance developer didn't realize was new.
A good way to get tests written is to increase accountability. If a developer has to explain to someone else exactly why they didn't write unit tests, they're more likely to do so. Most companies I've worked at have required that any proposed commit to a trunk be reviewed by another developer before the commit, and that the name of the reviewer be included in the commit comments. In this environment, you can tell your team that they should not allow code to "pass" peer code review unless unit tests are in place.
Now you have a chain of responsibility. If a developer commits code without naming the reviewer, you can ask them who reviewed the code (and, as I learned the hard way, having to say "nobody" to your boss when asked this question is no fun!). If you do become aware of code being committed without unit tests, you can ask both the developer and the code reviewer why unit tests were not included. The possibility of being asked this question encourages code reviewers to insist on unit tests.
One more step you can take is to install a commit hook in your version control system that e-mails the entire team when a commit is made, along with the files and even code that made up the commit. This provides a high level of transparency, and further encourages developers to "follow the rules." Of course, this only works if it scales to the number of commits your team does per day.
This is more of a psychological solution than a technical solution, but it's worked well for me when managing software teams. It's also a bit gentler than the rubber hose suggested in another answer. :-)
Here we just have a test folder, with a package structure mirroring the actual code. To check in a class, policy states it must have an accompanying testing class, with certain guidelines about which/how each method needs to be tested. (Example: We don't require pure getters and setters to be tested)
A quick glance at the testing folder shows when a class is missing, and the offending person can be beaten with a rubber hose (or whatever depending on your policy).
Go in and change a random variable or pass a null somewhere and you should expect to see a bunch of red. =D
One way to do it would be to write a script that searches all checkins for the term 'test' or 'testfixture' (obviously depending on your environment). If there is a commit log, or an email sent to the manager detailing the changes made, then it'd be trivial with your favorite text-processing language to scan the code files for signs of unit tests (the Assert keyword would probably be the best bet).
If there aren't unit tests, then during your next code review, take an example of a recent check-in, and spend ten minutes talking about the possible ways it could 'go wrong', and how unit tests would have found the error faster.
Have them submit a report or a screenshot of their unit test results on a regular basis. They can either fake it (which would most likely take more time than actually doing the tests) or actually do them.
In the end, you will know who is not doing the tests: they will be the ones with the bugs that could have easily been caught by unit tests.
The issue is as much social as it is technical. If you have developers who "don't quite 'get' TDD yet", then helping them understand the benefits of TDD may be a better long-term solution than technical measures that "make" them write tests because it's required. Reluctant developers can easily write tests that meet code coverage criteria and yet aren't valuable tests.
One more thing that should be mentioned here: you need a system for regularly running the unit tests. They should be part of your check-in gauntlet or nightly build system. Merely making sure the unit tests are written doesn't ensure you are getting value out of them.
Sit down with them and observe what they do. If they don't unit test, remind them gently.

What would you include in a 10 min Grok talk on Unit Testing

I'm soon to give a 10-minute Grok talk on unit testing at my company. I've been trying it myself, and I feel that it can certainly bring benefits to the company. We already do WebInject testing in our dedicated QA team, but I want to try to sell unit testing to the devs.
So with only 10mins what would you cover and why?
We're a Microsoft shop building C# web apps; I've used NUnit in my experience.
Unit testing is all about confidence.
It allows you to be confident that your code is solid, and that other people can rely on it when they're writing their own parts of a system. If you can get across that unit testing will help to eliminate the trepidation that comes with the first release of a new system, I would hope that your audience will soon become very interested.
I'd start with a problem a lot of programmers might be familiar with: the fear of making a change to existing code because they might break something. How that fear prevents work from happening, or prevents it being done properly (because they're afraid to refactor), and so leads to having to rewrite everything every X years.
Unit Testing -> Refactoring -> Living Code.
Edit:
By the way, I would not lead with the 'all code without unit tests is legacy code' quote from Michael Feathers. It certainly made me feel defensive the first time I heard it. By the time people stop feeling affronted, the 10 minutes will be over :-) (Personally, I think that quote is more true than it is helpful.)
Here's a good format for a short talk on a technique X:
why you decided to try X in the first place
what you personally have gained from using X
what limitations you've noticed, things that X doesn't address
Don't "sell" or spend lots of time on the theory. Do prepare beforehand and point people to books, URLs of articles or tutorials that you think are most helpful. Those who are interested after your talk can look up the details on the Web.
Try to briefly talk about the aspect of Test Driven Development: write tests first and the interfaces as you go, then implement everything.
Maybe also mention continuous integration: as soon as you check something into your source control system, the project gets compiled and all the tests run, so the developer knows immediately whether they have broken something.
If there are any project managers in the audience, also be fair enough to tell them that unit testing will make the project take 15-30% more time, but that it will be worth it in the long run.
You could mention that it will be a difficult learning curve, and it will feel like productivity is being impacted, but the benefits are worth it:
e.g. it effectively creates an automated regression test suite, which in turn allows you to make bigger additions or modifications to existing code without worrying that you are breaking some existing functionality.
Creation of production code will be slower, but this should be offset by the higher quality of the code, i.e. fewer bugs, which in the long run means overall higher productivity.
I think 10 minutes is enough to present a simple example of how unit testing can save you time.
Implement a class (you can TDD if you feel like it) and show how a unit test can catch a modification that breaks the class.
Also, you can highlight how much faster you can develop components when you test them in isolation (i.e. you don't need to bring up your web application, log in, navigate to your functionality, and test); you can just run your tests.
You might be able to perform this on a piece of code from your company - and maybe show how a unit test might have caught a bug you've had recently.
If you give a demonstration, do it on a working piece of code from a project that everyone is familiar with. Avoid contrived examples. Books on TDD are already full of them, and they don't really sell how TDD can work for a real project.
For the love of god, emphasize that unit tests are for testing "units" of logic. I hate looking at a QA suite of NUnit tests that nobody expected to have to maintain, where each "unit test" tests valid outputs for 150 (binary) input files and then shits itself if one fails, without telling you which one.
I would demonstrate:
The confidence it gives in code you produce
The confidence it gives when you change code because it passes the unit tests
The benefits of code coverage, no more "Oh that else statement was never tested!"
The benefits of running unit tests per each build on a CI platform like Hudson
FWIW, we run the crappy Visual Studio testing via MSTest on our Hudson box, and I've got an XSLT that Hudson uses to convert the results to the NUnit format so that Hudson can decipher them. Just putting that out there in case they want you to stick with a Microsoft testing platform.
Accountability, as highlighted by Kent Beck, is another trait that unit testing facilitates. Listen to his podcast at IT Conversations. (His point on accountability starts at 30:34.)
From a business perspective, you may want to highlight the fact that unit tests can "de-risk" any changes you make to your code. Once you have a suite of unit tests, you can make changes to the code base and know what breaks and what doesn't.
It might not be a bad idea to go over user testing. If you have a good set of tests, you can bring failing tests to the users after you make changes, to have them validate that the new results are correct. Additionally, you can streamline requirements gathering if you have the users write new unit test definitions for you. They don't need to be able to code, but they do need to be able to give you the appropriate inputs and expected outputs (otherwise, how would they know whether the changes they asked for are working?).
Visual Studio has a pretty nice set of tools for unit testing, so an example or two may go a long way toward giving your group an idea of what unit testing is like in practice.
Wear this t-shirt ;-)
A well-prepared live demo:
Find a "bug" in your "application".
Write a unit test that covers this bug.
Fix the bug.
Show that your code is green.
This way you can show that there's no way this bug will occur again!
Another way to do this:
Propose a problem that can be solved by creating an algorithm - something relatively simple, of course. Next, code the algorithm in a DLL project. Try to sneak in some weaknesses (i <= array.Length is always a good one). Then ask them how they would test this DLL.
Most devs run their apps to test them, but you can't run a DLL. You might get some suggesting you create a console app with methods that exercise the algorithm. Show them how you can craft unit tests to do this instead.
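Roughly what that could look like in C#/NUnit (everything here is invented, including the deliberately planted i <= values.Length weakness):

    using NUnit.Framework;

    public static class Summing
    {
        public static int Total(int[] values)
        {
            int total = 0;
            // The planted weakness: <= walks one element past the end.
            for (int i = 0; i <= values.Length; i++)
                total += values[i];
            return total;
        }
    }

    [TestFixture]
    public class SummingTests
    {
        [Test]
        public void Total_of_one_two_three_is_six()
        {
            // Fails with an IndexOutOfRangeException until the loop uses '<'.
            Assert.AreEqual(6, Summing.Total(new[] { 1, 2, 3 }));
        }
    }

The red bar, and the one-character fix that turns it green, is exactly the conversation you want the demo to start.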
Have a good set of resources for follow-up/self-directed learning:
the Pragmatic Unit Testing books (Java and C# editions) are good books on the subject
Kent Beck's paper on unit testing
links to any larger samples using the testing framework of choice

What is unit testing? [closed]

I saw many questions asking 'how' to unit test in a specific language, but no question asking 'what', 'why', and 'when'.
What is it?
What does it do for me?
Why should I use it?
When should I use it (also when not)?
What are some common pitfalls and misconceptions?
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automate-able and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Note that if your test code writes to a file, opens a database connection or does something over the network, it's more appropriately categorized as an integration test. Integration tests are a good thing, but should not be confused with unit tests. Unit test code should be short, sweet and quick to execute.
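For a feel of how short, sweet and quick such a test can be, here is a minimal NUnit sketch in C# (the StringCalculator is invented):

    using NUnit.Framework;

    public class StringCalculator
    {
        public int Add(string numbers)
        {
            int sum = 0;
            foreach (var part in numbers.Split(','))
                sum += int.Parse(part);
            return sum;
        }
    }

    [TestFixture]
    public class StringCalculatorTests
    {
        [Test]
        public void Add_returns_the_sum_of_the_given_numbers()
        {
            var calc = new StringCalculator();
            // No files, no database, no network: it runs in milliseconds.
            Assert.AreEqual(3, calc.Add("1,2"));
        }
    }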
Another way to look at unit testing is that you write the tests first. This is known as Test-Driven Development (TDD for short). TDD brings additional advantages:
You don't write speculative "I might need this in the future" code -- just enough to make the tests pass
The code you've written is always covered by tests
By writing the test first, you're forced into thinking about how you want to call the code, which usually improves the design of the code in the long run.
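A sketch of test-first in practice, again with C#/NUnit and invented names: the tests below would be written before BoundedStack exists, and the class underneath is only what's needed to make them pass - nothing speculative.

    using NUnit.Framework;

    [TestFixture]
    public class BoundedStackTests
    {
        [Test]
        public void A_new_stack_is_empty()
        {
            Assert.IsTrue(new BoundedStack().IsEmpty);
        }

        [Test]
        public void Push_makes_the_stack_non_empty()
        {
            var stack = new BoundedStack();
            stack.Push(42);
            Assert.IsFalse(stack.IsEmpty);
        }
    }

    // Just enough production code to go green; grow it test by test.
    public class BoundedStack
    {
        private int _count;
        public bool IsEmpty => _count == 0;
        public void Push(int value) => _count++;
    }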
If you're not doing unit testing now, I recommend you get started on it. Get a good book; practically any xUnit book will do, because the concepts are very much transferable between them.
Sometimes writing unit tests can be painful. When it gets that way, try to find someone to help you, and resist the temptation to "just write the damn code". Unit testing is a lot like washing the dishes. It's not always pleasant, but it keeps your metaphorical kitchen clean, and you really want it to be clean. :)
Edit: One misconception comes to mind, although I'm not sure if it's so common. I've heard a project manager say that unit tests made the team write all the code twice. If it looks and feels that way, well, you're doing it wrong. Not only does writing the tests usually speed up development, but it also gives you a convenient "now I'm done" indicator that you wouldn't have otherwise.
I don't disagree with Dan (although a better choice may just be not to answer)...but...
Unit testing is the process of writing code to test the behavior and functionality of your system.
Obviously tests improve the quality of your code, but that's just a superficial benefit of unit testing. The real benefits are to:
Make it easier to change the technical implementation while making sure you don't change the behavior (refactoring). Properly unit tested code can be aggressively refactored/cleaned up with little chance of breaking anything without noticing it.
Give developers confidence when adding behavior or making fixes.
Document your code
Indicate areas of your code that are tightly coupled. It's hard to unit test code that's tightly coupled.
Provide a means to use your API and look for difficulties early on
Indicate methods and classes that aren't very cohesive
You should unit test because it's in your interest to deliver a maintainable, quality product to your client.
I'd suggest you use it for any system, or part of a system, which models real-world behavior. In other words, it's particularly well suited for enterprise development. I would not use it for throw-away/utility programs. I would not use it for parts of a system that are problematic to test (UI is a common example, but that isn't always the case)
The greatest pitfall is that developers test too large a unit, or they consider a method a unit. This is particularly true if you don't understand Inversion of Control - in which case your unit tests will always turn into end-to-end integration tests. Unit tests should test individual behaviors - and most methods have many behaviors.
The greatest misconception is that programmers shouldn't test. Only bad or lazy programmers believe that. Should the guy building your roof not test it? Should the doctor replacing a heart valve not test the new valve? Only a programmer can test that his code does what he intended it to do (QA can test edge cases - how code behaves when it's told to do things the programmer didn't intend - and the client can do acceptance testing: does the code do what the client paid for it to do?).
The main difference between unit testing and "just opening a new project and testing this specific code" is that unit testing is automated, and thus repeatable.
If you test your code manually, it may convince you that the code is working perfectly - in its current state. But what about a week later, when you made a slight modification in it? Are you willing to retest it again by hand whenever anything changes in your code? Most probably not :-(
But if you can run your tests anytime, with a single click, exactly the same way, within a few seconds, then they will show you immediately whenever something is broken. And if you also integrate the unit tests into your automated build process, they will alert you to bugs even in cases where a seemingly completely unrelated change broke something in a distant part of the codebase - when it would not even occur to you that there is a need to retest that particular functionality.
This is the main advantage of unit tests over hand testing. But wait, there is more:
unit tests shorten the development feedback loop dramatically: with a separate testing department it may take weeks for you to know that there is a bug in your code, by which time you have already forgotten much of the context, thus it may take you hours to find and fix the bug; OTOH with unit tests, the feedback cycle is measured in seconds, and the bug fix process is typically along the lines of an "oh sh*t, I forgot to check for that condition here" :-)
unit tests effectively document (your understanding of) the behaviour of your code
unit testing forces you to reevaluate your design choices, which results in simpler, cleaner design
Unit testing frameworks, in turn, make it easy for you to write and run your tests.
I was never taught unit testing at university, and it took me a while to "get" it. I read about it, went "ah, right, automated testing, that could be cool I guess", and then I forgot about it.
It took quite a bit longer before I really figured out the point: Let's say you're working on a large system and you write a small module. It compiles, you put it through its paces, it works great, you move on to the next task. Nine months down the line and two versions later someone else makes a change to some seemingly unrelated part of the program, and it breaks the module. Worse, they test their changes, and their code works, but they don't test your module; hell, they may not even know your module exists.
And now you've got a problem: broken code is in the trunk and nobody even knows. The best case is an internal tester finds it before you ship, but fixing code that late in the game is expensive. And if no internal tester finds it...well, that can get very expensive indeed.
The solution is unit tests. They'll catch problems when you write code - which is fine - but you could have done that by hand. The real payoff is that they'll catch problems nine months down the line when you're now working on a completely different project, but a summer intern thinks it'll look tidier if those parameters were in alphabetical order - and then the unit test you wrote way back fails, and someone throws things at the intern until he changes the parameter order back. That's the "why" of unit tests. :-)
Chipping in on the philosophical pros of unit testing and TDD, here are a few of the key "lightbulb" observations which struck me on my tentative first steps on the road to TDD enlightenment (none original or necessarily news)...
TDD does NOT mean writing twice the amount of code. Test code is typically fairly quick and painless to write, and it is a key part of your design process.
TDD helps you to realize when to stop coding! Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing.
The tests and the code work together to achieve better code. Your code could be bad/buggy. Your TEST could be bad/buggy. In TDD you are banking on the chances of BOTH being bad/buggy being fairly low. Often it's the test that needs fixing, but that's still a good outcome.
TDD helps with coding constipation. You know that feeling that you have so much to do you barely know where to start? It's Friday afternoon, if you just procrastinate for a couple more hours... TDD allows you to flesh out very quickly what you think you need to do, and gets your coding moving quickly. Also, like lab rats, I think we all respond to that big green light and work harder to see it again!
In a similar vein, designers can SEE what they're working on. They can wander off for a juice / cigarette / iPhone break and return to a monitor that immediately gives them a visual cue as to where they got to. TDD gives us something similar. It's easier to see where we got to when life intervenes...
I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful, even if the rest of my code coverage is woefully incomplete.
TDD helps in all kinds of surprising ways down the line. Good unit tests can help document what something is supposed to do, they can help you migrate code from one project to another and give you an unwarranted feeling of superiority over your non-testing colleagues :)
This presentation is an excellent introduction to all the yummy goodness testing entails.
I would like to recommend the book xUnit Test Patterns by Gerard Meszaros. It's large, but it's a great resource on unit testing. Here is a link to his web site, where he discusses the basics of unit testing: http://xunitpatterns.com/XUnitBasics.html
I use unit tests to save time.
When building business logic (or data access), testing functionality can often involve typing stuff into a lot of screens that may or may not be finished yet. Automating these tests saves time.
For me unit tests are a kind of modularised test harness. There is usually at least one test per public function. I write additional tests to cover various behaviours.
All the special cases that you thought of when developing the code can be recorded in the code in the unit tests. The unit tests also become a source of examples on how to use the code.
It is a lot faster for me to discover that my new code breaks something via my unit tests than to check in the code and have some front-end developer find a problem.
For data access testing I try to write tests that either have no change or clean up after themselves.
Unit tests aren’t going to be able to solve all the testing requirements. They will be able to save development time and test core parts of the application.
This is my take on it. I would say unit testing is the practice of writing software tests to verify that your real software does what it is meant to do. This started with JUnit in the Java world and has become a best practice in PHP as well, with SimpleTest and PHPUnit. It's a core practice of Extreme Programming and helps you to be sure that your software still works as intended after editing. If you have sufficient test coverage, you can do major refactoring, bug fixing, or add features rapidly with much less fear of introducing other problems.
It's most effective when all unit tests can be run automatically.
Unit testing is generally associated with OO development. The basic idea is to create a script which sets up the environment for your code and then exercises it; you write assertions, specify the intended output that you should receive and then execute your test script using a framework such as those mentioned above.
The framework will run all the tests against your code and then report back the success or failure of each test. PHPUnit is run from the Linux command line by default, though there are HTTP interfaces available for it. SimpleTest is web-based by nature and is much easier to get up and running, IMO. In combination with Xdebug, PHPUnit can give you automated statistics for code coverage, which some people find very useful.
Some teams write hooks from their subversion repository so that unit tests are run automatically whenever you commit changes.
It's good practice to keep your unit tests in the same repository as your application.
Libraries like NUnit, xUnit or JUnit are all but mandatory if you want to develop your projects using the TDD approach popularized by Kent Beck:
You can read Introduction to Test Driven Development (TDD) or Kent Beck's book Test Driven Development: By Example.
Then, if you want to be sure your tests cover a "good" portion of your code, you can use software like NCover, JCover, PartCover or whatever. They'll tell you the coverage percentage of your code. Depending on how adept you are at TDD, you'll know whether you've practiced it well enough :)
Unit testing is the testing of a unit of code (e.g. a single function) without the need for the infrastructure that that unit of code relies on - i.e. testing it in isolation.
If, for example, the function that you're testing connects to a database and does an update, in a unit test you might not want to do that update. You would if it were an integration test but in this case it's not.
So a unit test would exercise the functionality enclosed in the "function" you're testing without side effects of the database update.
Say your function retrieved some numbers from a database and then performed a standard deviation calculation. What are you trying to test here? That the standard deviation is calculated correctly or that the data is returned from the database?
In a unit test you just want to test that the standard deviation is calculated correctly. In an integration test you want to test the standard deviation calculation and the database retrieval.
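Here's a sketch of that separation in C#/NUnit (the INumberSource abstraction and the numbers are invented): the statistics logic pulls its data through an interface, so the unit test feeds it a fake instead of a database.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    public interface INumberSource
    {
        IEnumerable<double> GetNumbers(); // the real implementation would query the database
    }

    public class Statistics
    {
        private readonly INumberSource _source;
        public Statistics(INumberSource source) { _source = source; }

        // Population standard deviation of whatever the source provides.
        public double StandardDeviation()
        {
            var values = _source.GetNumbers().ToList();
            double mean = values.Average();
            return Math.Sqrt(values.Sum(v => (v - mean) * (v - mean)) / values.Count);
        }
    }

    [TestFixture]
    public class StatisticsTests
    {
        class FakeSource : INumberSource
        {
            public IEnumerable<double> GetNumbers()
                => new[] { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0 };
        }

        [Test]
        public void Standard_deviation_is_computed_correctly()
        {
            var stats = new Statistics(new FakeSource());
            Assert.AreEqual(2.0, stats.StandardDeviation(), 1e-9);
        }
    }

(For that sample the mean is 5 and the mean squared deviation is 32/8 = 4, so the population standard deviation is exactly 2.) An integration test would instead wire a real INumberSource up against a test database.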
Unit testing is about writing code that tests your application code.
The Unit part of the name is about the intention to test small units of code (one method for example) at a time.
xUnit frameworks are there to help with this testing. Part of that is automated test runners that tell you which tests fail and which ones pass.
They also have facilities to set up common code that you need for each test beforehand and to tear it down when all the tests have finished.
You can have a test check that an expected exception has been thrown, without having to write the whole try/catch block yourself.
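With NUnit, for example, those facilities look roughly like this (the Parser class is invented):

    using System;
    using NUnit.Framework;

    public class Parser
    {
        public int Parse(string text) => int.Parse(text);
    }

    [TestFixture]
    public class ParserTests
    {
        private Parser _parser;

        [SetUp]    // runs before each test
        public void CreateParser() => _parser = new Parser();

        [TearDown] // runs after each test
        public void Cleanup() => _parser = null;

        [Test]
        public void Parsing_garbage_throws_without_a_manual_try_catch()
        {
            Assert.Throws<FormatException>(() => _parser.Parse("not-a-number"));
        }
    }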
I think the point that you don't understand is that unit testing frameworks like NUnit (and the like) help you automate small to medium-sized tests. Usually you can run the tests in a GUI (that's the case with NUnit, for instance) by simply clicking a button and then - hopefully - seeing the progress bar stay green. If it turns red, the framework shows you which test failed and what exactly went wrong. In a normal unit test, you often use assertions, e.g. Assert.AreEqual(expectedValue, actualValue, "some description") - if the two values are unequal, you will see an error saying "some description: expected <expectedValue> but was <actualValue>".
In conclusion, unit testing makes testing faster and a lot more comfortable for developers. You can run all the unit tests before committing new code, so that you don't break the build for other developers on the same project.
Use Testivus. All you need to know is right there :)
Unit testing is a practice for making sure that the function or module you are going to implement behaves as expected (per the requirements), and also for checking how it behaves in scenarios like boundary conditions and invalid input.
xUnit, NUnit, mbUnit, etc. are tools which help you in writing the tests.
Test-Driven Development has somewhat taken over the term "unit test". As an old-timer, I'll mention the more generic definition of it.
Unit Test also means testing a single component in a larger system. This single component could be a dll, exe, class library, etc. It could even be a single system in a multi-system application. So ultimately Unit Test ends up being the testing of whatever you want to call a single piece of a larger system.
You would then move up to integrated or system testing by testing how all the components work together.
First of all, whether you're speaking about unit testing or any other kind of automated testing (integration, load, UI testing, etc.), the key difference from what you suggest is that it is automated, repeatable, and it doesn't consume any human resources (nobody has to perform the tests; they usually run at the press of a button).
I went to a presentation on unit testing at FoxForward 2007 and was told never to unit test anything that works with data. After all, if you test on live data, the results are unpredictable, and if you don't test on live data, you're not actually testing the code you wrote. Unfortunately, that's most of the coding I do these days. :-)
I did take a shot at TDD recently when I was writing a routine to save and restore settings. First, I verified that I could create the storage object. Then, that it had the method I needed to call. Then, that I could call it. Then, that I could pass it parameters. Then, that I could pass it specific parameters. And so on, until I was finally verifying that it would save the specified setting, allow me to change it, and then restore it, for several different syntaxes.
I didn't get to the end, because I needed-the-routine-now-dammit, but it was a good exercise.
What do you do if you are given a pile of crap and feel stuck in a perpetual state of cleanup, knowing that any new feature or code you add can break the current set, because the current software is like a house of cards?
How can we do unit testing then?
You start small. The project I just got into had no unit testing until a few months ago. When coverage was that low, we would simply pick a file that had no coverage and click "add tests".
Right now we're up to over 40%, and we've managed to pick off most of the low-hanging fruit.
(The best part is that even at this low level of coverage, we've already run into many instances of the code doing the wrong thing, and the testing caught it. That's a huge motivator to push people to add more testing.)
This answers why you should be doing unit testing.
The three videos below cover unit testing in JavaScript, but the general principles apply across most languages.
Unit Testing: Minutes Now Will Save Hours Later - Eric Mann - https://www.youtube.com/watch?v=_UmmaPe8Bzc
JS Unit Testing (very good) - https://www.youtube.com/watch?v=-IYqgx8JxlU
Writing Testable JavaScript - https://www.youtube.com/watch?v=OzjogCFO4Zo
Now, I'm just learning about the subject, so I may not be 100% correct, and there's more to it than what I'm describing here, but my basic understanding of unit testing is this: you write some test code (kept separate from your main code) that calls a function in your main code with the input (arguments) that the function requires, and then checks whether it gets back a valid return value. If the value is valid, the unit testing framework you're using shows a green light (all good); if the value is invalid, you get a red light, and you can fix the problem straight away, before you release the new code to production. Without testing, you might never have caught the error.
So you write tests for your current code and shape the code so that it passes the tests. Months later, you or someone else needs to modify that function in your main code. Because you already wrote test code for it, you run the tests again, and they may fail because the change introduced a logic error, or because the function now returns something completely different from what it is supposed to return. Again, without the test in place, that error might be hard to track down, since it can affect other code as well and go unnoticed.
Also, having a computer program run through your code and test it, instead of you manually doing it in the browser page by page, saves time (this is unit testing for JavaScript). Say you modify a function that is used by some script on a web page, and it works well for its new intended purpose. But say, for argument's sake, that another function somewhere else in your code depends on the newly modified function to operate properly. That dependent function may now stop working because of your change; without tests that your computer runs automatically, you won't notice there's a problem until the dependent function is actually executed. You'd have to manually navigate to a web page that includes the script calling it, and only then would you notice the bug.
To reiterate: having tests that run while you develop your application will catch these kinds of problems as you code. Without tests, you'd have to manually go through your whole application, and even then the bug can be hard to spot; naively, you send the code out into production, and after a while a kind user sends you a bug report (which won't be as good as the error messages in a testing framework).
It's quite confusing when you first hear of the subject. You think to yourself: am I not already testing my code? The code I've written works like it's supposed to - why do I need another framework? Yes, you are already testing your code, but a computer is better at doing it. You just have to write good enough tests for a function/unit of code once, and the rest is taken care of by the mighty CPU, instead of you manually checking that all of your code still works whenever you make a change.
Also, you don't have to unit test your code if you don't want to, but it pays off as your project/code base grows, because the chances of introducing bugs increase.
Unit testing, and TDD in general, enables you to have shorter feedback cycles about the software you are writing. Instead of a large test phase at the very end of the implementation, you incrementally test everything you write. This increases code quality a great deal, as you immediately see where you might have bugs.