When should you stop unit testing?

I can often identify plenty of areas which are nicely encapsulated and easily unit tested, but I also find a lot of code where unit testing doesn't seem to work as well - typically data access and user interface. No matter what unit testing "techniques" I try, I tend to find that in these places it's not only a lot of effort to create functioning unit tests, but the tests tend to be very fragile and don't really test very much.
At what point do you stop and decide that the benefits of unit testing aren't worth the cost?

When you can provide better value by doing something else.

I tend to test only the model and data persistence. Testing the model is mandatory. UI (desktop application, webapp, command-line interface, etc.) is hard to test, so I write tests for it only in rare circumstances.

Usually I only test the model and the controller. Unit tests are hard to apply to the UI; usually I prefer to test the app manually.
If creating these tests costs more time than manually testing every time you might have a regression, then the test is useless (easy to say, but not to evaluate...).

If you need to cut out testing, cut out integration or end-to-end testing. Misko Hevery at Google explains it really well here.
"Unit Testing gives you more bang for
your buck"
is the best quote to come out of his article.
Other than that, when you have decent code coverage and are handling a few edge cases of your code, then it's a good time to stop unit testing.

If you have access to some live data you can use it for unit testing. You can also use data generators and random data. Unit tests only give you some level of confidence that the code won't create problems in the future. If you are confident with your testing you can discontinue unit tests.

I like the title of the question. Apart from that I think it is a dupe of
Is there such a thing as excessive unit testing?

I would say when you've gained an acceptable level of confidence. For example, on my project at work we are under such tight time constraints that I only have time to test certain parts of the code (not all of it), just enough to give me a good confidence level.

As far as data access testing goes, have you tried mock tests to simulate responses?
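For example, here is a minimal sketch using JUnit 5 and Mockito (the CustomerRepository and GreetingService types are made up for illustration) in which the data-access response is simulated so the test never touches a real database:

import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical repository interface and service, just to illustrate the idea.
interface CustomerRepository {
    String findNameById(long id);
}

class GreetingService {
    private final CustomerRepository repository;

    GreetingService(CustomerRepository repository) {
        this.repository = repository;
    }

    String greet(long customerId) {
        return "Hello, " + repository.findNameById(customerId);
    }
}

class GreetingServiceTest {
    @Test
    void greetsCustomerByName() {
        // Simulate the data-access response instead of hitting a real database.
        CustomerRepository repository = mock(CustomerRepository.class);
        when(repository.findNameById(42L)).thenReturn("Alice");

        GreetingService service = new GreetingService(repository);

        assertEquals("Hello, Alice", service.greet(42L));
    }
}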

The basic rule of thumb I'd follow is whether the effort to build the unit test is more than the effort to repeatedly test the feature manually.
If you look at the test projects in Visual Studio Team Edition for Testers, there is an item called a "Manual Test", which is essentially an instruction document telling a human how to carry out the test and manually pass it. Certain things, like the UI testing you mention, or code to work around obscure or buggy hardware behaviour in the underlying framework, OS, or driver, are better verified by human eyes.

If you are using TDD, then you stop unit testing when all the tests in the test list have succeeded.
Otherwise, you stop unit testing when the cost of finding more bugs through unit testing exceeds the cost of finding them through your QA process, and when you've reached an acceptable level of code coverage through the combination of all tests.

When there isn't time in the project plan for it and time is being spent finding ways of testing rather than working towards the goal of the project.


Creating easy to maintain tests

During a proposal about implementing unit tests for our projects, my manager argued that it could be more expensive to create unit tests since you will be maintaining two sets of code. His argument is that whenever there is a change in requirements or functionality, the unit tests can become obsolete, and the developers then have to update the tests in order to pass the automated build. He said that this can be inconvenient for the developers since their coding time is reduced and spent fixing the tests.
The reason I wanted to implement unit tests is to minimize bug occurrence, especially of the critical ones, before the code is turned over to the validation team for functional testing. I also believe that the cost of creating unit tests can be recovered by having better quality systems.
Right now, his challenge to me is to create easy-to-maintain tests that would be easy to modify or, if possible, tests that will not break easily. I am using xUnit testing frameworks like JUnit and mocking frameworks like Mockito and PowerMock to help in testing.
I'm looking for tips and techniques on how to write easy-to-maintain tests and how to avoid brittle tests. Are there other tools which can come in handy in creating such tests? I'm writing code in Java and C++. Thanks.
I think you're facing a difference in culture - your manager fears the potential time sink that testing represents. It is well known that a TDD/BDD process is more expensive up-front, but as time goes on you start to reap the rewards: "changing just this one isolated thing" no longer throws up painful, embarrassing, or costly bugs.
My suggestion is that you do some research and put together a document that tries to sell the process to your manager, putting forward a business case based on things that have already happened in your business that could or would have been solved with a solid test suite.
There is one book that goes into TDD better than any website, article etc I've ever seen. I'd highly recommend it as reading for anyone wanting to practise TDD/BDD/OOP:
http://www.growing-object-oriented-software.com/ (I don't earn any money from linking this! - but it is a superb addition to my desk!).
From my experience, it is almost impossible to truly convince a "Unit Testing Skeptic".
My advice: Start adding tests and increase coverage on a selection of the most important and frequently changing parts of your product. Do this on your own time, and possibly with an "accomplice", and time your work.
After that, show the skeptics examples of regression bugs caught by these tests, and how your time spent was actually worth it. If you succeed in doing so, management will see value in devoting resources for unit tests.
As per the technical challenge, I agree with other answers here that practice makes perfect, but there are some guidelines that could help you:
Test only one thing per test. If you have 100 tests that assert the same condition, when this condition changes, you'll have to update 100 tests.
If your "Class Under Test" is huge, with a ton of logic in it, it will be very hard to test it. Try to refactor your classes into small, coherent units of logic. When one of these units will change, the number of broken tests will be relatively small.
Test your public interface, not implementation details. This will allow developers to fix bugs, improve performance, and refactor, with minimal effect on tests.
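As a small illustration of that last point, here is a sketch (the ShoppingCart class is invented) that asserts only on what the class promises publicly, so its internal storage can be refactored without breaking the test:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: internally it keeps a list, but that is
// an implementation detail the test never looks at.
class ShoppingCart {
    private final List<Integer> pricesInCents = new ArrayList<>();

    void add(int priceInCents) {
        pricesInCents.add(priceInCents);
    }

    int totalInCents() {
        return pricesInCents.stream().mapToInt(Integer::intValue).sum();
    }
}

class ShoppingCartTest {
    @Test
    void totalIsSumOfAddedPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(250);
        cart.add(199);

        // Assert only on the public result; swapping the internal list for a
        // running total later would not break this test.
        assertEquals(449, cart.totalInCents());
    }
}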
There are many more guidelines on the first page of Google results for "unit testing guidelines", and you can read The Art Of Unit Testing for extensive coverage of writing good unit tests.
Good luck!
Whether you write unit tests or not should not concern your management. What usually matters is that functionality is implemented fast and with no bugs. How you achieve it is up to you, developers; unit testing is just one method. It works best for loosely coupled code where you can easily mock out dependencies.
But writing good unit tests is not just about the decision to write them, or the tools you use; it is mostly about experience, which you can only gain by practicing. There are no simple recipes that will let you write good unit tests, just as there are none for writing good code. If you just force people with no experience to write unit tests, productivity will surely slump, and there will be no apparent benefits. Ultimately, you should be writing unit tests because you believe they help you, not because someone else thinks they should help you.
The answer is that the unit tests should test functionality, not implementation. So if the developers refactor the code, nothing should change with the unit tests, because they aren't testing the internals, just the results.
Of course, if you change the interface or the behavior drastically, the tests will change, but then you will want to be testing the new code ANYWAY, so you'd still be writing tests.
Long story short, there is a lot of research out there showing that a good test suite saves time in the long run by a huge margin.
Having unified test data factories may reduce maintenance cost if the system under test changes.
Tests often have similar test setups with only small variations.
When I started unit testing I used copy & paste to create a new test from an existing one.
Every test had a long test setup.
After changes in the system under test I had to update many tests.
Today I use test data factories where only the difference from the standard is assigned.
Example:
@Test
void customerUnder18ShouldNotBeGrantedAccess() {
    // Arrange: start from the factory's standard customer and change only what this test cares about
    Customer customer = TestdataFactory.createStandardCustomer();
    customer.setAge(16);

    // Act (accessService is a hypothetical service under test)
    boolean granted = accessService.isAccessGranted(customer);

    // Assert
    assertFalse(granted);
}
Here are more useful tips.

Why is using integration tests instead of unit tests a bad idea?

Let me start from definition:
Unit Test is a software verification and validation method in which a programmer tests if individual units of source code are fit for use
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, very often these terms are mixed up. Developers refer to automated integration tests as unit tests. Also, some argue about which one is better, which seems to me to be the wrong question entirely.
I would like to ask the development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach
Integration tests are slow and cannot be executed very often
In most cases integration tests do not indicate the source of the problem
It's more difficult to create a test environment with integration tests
It's more difficult to ensure high coverage (e.g. simulating special cases, unexpected failures, etc.)
Integration tests cannot be used with interaction-based testing
Integration tests move the moment of discovering a defect further out (from paxdiablo)
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically I want to collect arguments to present to development teams which write ONLY integration tests and consider them unit tests.
Any test which involves components from different layers is considered an integration test, in contrast to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't the problem; using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) finds that problem. Many levels of people get involved and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide # 5, or search the net for "Defect prevention: Reducing costs and enhancing quality".
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), therefore you need more debugging on failures.
The combination of scenarios is too big for integration tests when the code is not unit tested.
Mostly I write unit tests and about ten times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
From a "pure" TDD approach, unit tests aren't tests; they're specifications of functionality. Integration tests, OTOH, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that utilizes many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it. I understand that integration testing is quick and easy, but I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behavior at the individual class level. Unit tests check that a single class is doing what it was designed to do. Integration tests check that a number of classes working together do what you expect them to do for that particular collaboration. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component such as a method or at most a class is supposed to test that component in every legal scenario (of course, one abstracts equivalence classes but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage in integration testing, for all the reasons you specified in your question. Without unit tests, it is more likely that a change which causes a unit to operate in a new scenario would not be caught and might be missed in integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
I am not sure that most developers refer to unit tests as integration tests. My impression is that most developers understand the differences, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock those dependencies to ensure you test just your single class. There should be no external resources in a unit test.
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. They require a good knowledge of OO principles (fundamentally, one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible and extendable.
When you check in, your build server should only run unit tests, and they should be done in a few seconds, not minutes or hours. Integration tests should be run overnight or manually as needed.
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable and neither one can replace the other in the job they do. I do see a lot of integration tests masquerading as unit tests though having dependencies and taking a long time to run. They should function separately and as part of a continuous integration system.
Integration tests do often find things that unit tests do not though...
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they are testing different things, and of course passing and failing tests will give you different information. They make up a sort of yin and yang of testing, as they approach it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These will cut down on the need for as much manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test will tell you that something is broken, but often doesn't give you a whole lot of information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your function are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when it's deployed. A failing unit test will give you detailed, specific information about why it's failing, which should in theory make it easier to debug.
In the end I believe a combination of both unit and integration tests will yield the quickest path to the most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit Testing: Test small things in isolation with low level details including but not limited to 'method conditions', checks, loops, defaulting, calculations etc.
Integration testing: Test a wider scope which involves a number of components that can affect each other's behaviour when married together. Integration tests should cover end-to-end integration and behaviours. The purpose of integration tests should be to prove that systems/components work fine when integrated together.
(I think) What the OP refers to here as integration tests leans more towards scenario-level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code the feature uses/consumes and only testing their own feature code, because they assume someone else tested the rest so it should be fine. This helps code coverage but can harm the application in general.
In theory the small, isolated scope of unit tests should cover a lot, since everything is tested in its own scope. But such tests are flawed and do not see the complete picture.
A good unit test should try to mock as little as possible. Mocking the API and persistence would be reasonable, for example. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some objects for a test without mocking; and if every developer working on the project does this as well, it gets even easier. Then the tests are useful. These kinds of tests have an integration character to them and aren't as easy to write, but they help you find design flaws in your code. If it is not easy to test, then adapt your code to make it easy to test. (TDD)
Pros
Fast issue identification
Helps even before a PR merge
Simple to implement and maintain
Provides a lot of data for code quality checking (e.g. coverage)
Allows TDD (Test Driven Development)
Cons
Misses scenario integration errors
Succumbs to developer blindness in their own code (happens to all of us)
A good integration test is executed for complete end-to-end scenarios and even checks persistence and APIs, which the unit tests could not cover, so you know where to look first when those fail.
Pros:
Tests close to a real-world e2e scenario
Finds Issues that developers did not think about
Very helpful in microservices architectures
Cons:
Most of the time slow
Often needs a rather complex setup
Environment (persistence and API) pollution issues (needs cleanup steps)
Mostly not feasible to be used on PR's (Pull Requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best from both, and not just have them to show good statistics to management.

Is there such a thing as too much unit testing?

I tried looking through all the pages about unit tests and could not find this question. If this is a duplicate, please let me know and I will delete it.
I was recently tasked to help implement unit testing at my company. I realized that I could unit test all the Oracle PL/SQL code, Java code, HTML, JavaScript, XML, XSLT, and more.
Is there such a thing as too much unit testing? Should I write unit tests for everything above or is that overkill?
This depends on the project and its tolerance for failure. There is no single answer. If you can risk a bug, then don't test everything.
When you have tons of tests, it is also likely you will have bugs in your tests. Adding to your headaches.
Test what needs testing and leave out what does not, which is often the fairly simple stuff.
Is there such a thing as too much unit testing?
Sure. The problem is finding the right balance between enough unit testing to cover the important areas of functionality, and focusing effort on creating new value for your customers in the terms of system functionality.
Unit testing code vs. leaving code uncovered by tests both have a cost.
The cost of excluding code from unit testing may include (but aren't limited to):
Increased development time due to fixing issues you can't automatically test
Fixing problems discovered during QA testing
Fixing problems discovered when the code reaches your customers
Loss of revenue due to customer dissatisfaction with defects that made it through testing
The costs of writing a unit test include (but aren't limited to):
Writing the original unit test
Maintaining the unit test as your system evolves
Refining the unit test to cover more conditions as you discover them in testing or production
Refactoring unit tests as the underlying code under test is refactored
Lost revenue when it takes longer for your application to reach the market
The opportunity cost of implementing features that could drive sales
You have to make your best judgement about what these costs are likely to be, and what your tolerance is for absorbing such costs.
In general, unit testing costs are mostly absorbed during the development phase of a system - and somewhat during its maintenance. If you spend too much time writing unit tests you may miss a valuable window of opportunity to get your product to market. This could cost you sales or even long-term revenue if you operate in a competitive industry.
The cost of defects is absorbed during the entire lifetime of your system in production - up until the point the defect is corrected. And potentially even beyond that, if the defect is significant enough that it affects your company's reputation or market position.
Kent Beck of JUnit and JUnitMax fame answered a similar question of mine.
The question has slightly different semantics but the answer is definitely relevant
The purpose of unit tests is generally to make it possible to refactor or change with greater assurance that you did not break anything. If a change is scary because you do not know if you will break anything, you probably need to add a test. If a change is tedious because it will break a lot of tests, you probably have too many tests (or too fragile tests).
The most obvious case is the UI. What makes a UI look good is something that is hard to test, and using a master example tends to be fragile. So the layer of the UI involving the look of something tends not to be tested.
The other times it might not be worth it is if the test is very hard to write and the safety it gives is minimal.
For HTML I tended to check that the data I wanted was there (using XPath queries), but did not test the entire HTML. Similarly for XSLT and XML. In JavaScript, when I could I tested libraries but left the main page alone (except that I moved most code into libraries). If the JavaScript is particularly complicated I would test more. For databases I would look into testing stored procedures and possibly views; the rest is more declarative.
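A rough sketch of that XPath-style check, assuming the rendered output is well-formed XHTML (the element ids and names are invented), might look like this: the test only asserts that the data it cares about is present, not the whole markup.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.StringReader;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;

import org.junit.jupiter.api.Test;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

class PageContentTest {
    @Test
    void customerNameAppearsInRenderedPage() throws Exception {
        // In a real test this string would come from your rendering code;
        // here it is inlined and assumed to be well-formed XHTML.
        String html = "<html><body><span id=\"customer-name\">Alice</span></body></html>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(html)));

        // Check only that the data we want is there, using an XPath query.
        String name = XPathFactory.newInstance().newXPath()
                .evaluate("//span[@id='customer-name']", doc);

        assertEquals("Alice", name);
    }
}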
However, in your case first start with the stuff that worries you the most or is about to change, especially if it is not too difficult to test. Check the book Working Effectively with Legacy Code for more help.
Yes, there is such a thing as too much unit testing. One example would be unit testing in a whitebox manner, such that you're effectively testing the specific implementation; such testing would effectively slow down progress and refactoring by requiring compliant code to need new unit tests (because the tests were dependent upon specific implementation details).
I suggest that in some situations you might want automated testing, but no 'unit' testing at all (Should one test internal implementation, or only test public behaviour?), and that any time spent writing unit tests would be better spent writing system tests.
While more tests is usually better (I have yet to be on a project that actually had too many tests), there's a point at which the ROI bottoms out, and you should move on. I'm assuming you have finite time to work on this project, by the way. ;)
Adding unit tests has some amount of diminishing returns -- after a certain point (Code Complete has some theories), you're better off spending your finite amount of time on something else. That may be more testing/quality activities like refactoring and code review, usability testing with real human users, etc., or it could be spent on other things like new features, or user experience polish.
As EJD said, you can't verify the absence of errors.
This means there are always more tests you could write. Any of these could be useful.
What you need to understand is that unit-testing (and other types of automated testing you use for development purposes) can help with development, but should never be viewed as a replacement for formal QA.
Some tests are much more valuable than others.
There are parts of your code that change a lot more frequently, are more prone to break, etc. These are the most economical tests.
You need to balance out the amount of testing you agree to take on as a developer. You can easily overburden yourself with unmaintainable tests. IMO, unmaintainable tests are worse than no tests because they:
Turn others off from trying to maintain a test suite or write new tests.
Detract from you adding new, meaningful functionality. If automated testing is not a net-positive result, you should ditch it like other engineering practices.
What should I test?
Test the "Happy Path" - this ensures that you get interactions right, and that things are wired together properly. But you don't adequately test a bridge by driving down it on a sunny day with no traffic.
Pragmatic Unit Testing recommends you use Right-BICEP to figure out what to test: "Right" for the happy path, then Boundary conditions, check any Inverse relationships, use another method (if it exists) to Cross-check results, force Error conditions, and finally take into account any Performance considerations that should be verified. I'd say if you are thinking about tests to write in this way, you'll most likely figure out how to get to an adequate level of testing. You'll be able to figure out which ones are more useful and when. See the book for much more info.
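As a tiny sketch of a few of those letters in JUnit 5 (the PriceCalculator class and its 10-item discount threshold are invented for illustration): the happy path, a boundary condition, and a forced error condition.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test: bulk pricing kicks in at 10 items or more.
class PriceCalculator {
    int totalInCents(int quantity, int unitPriceInCents) {
        if (quantity < 0) {
            throw new IllegalArgumentException("quantity must not be negative");
        }
        int total = quantity * unitPriceInCents;
        return quantity >= 10 ? total * 90 / 100 : total; // 10% bulk discount
    }
}

class PriceCalculatorTest {
    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    void happyPathChargesFullPriceForSmallOrders() {
        assertEquals(500, calculator.totalInCents(5, 100)); // "Right": the normal case
    }

    @Test
    void boundaryAtTenItemsGetsTheDiscount() {
        assertEquals(900, calculator.totalInCents(10, 100)); // Boundary condition
    }

    @Test
    void errorConditionIsForced() {
        assertThrows(IllegalArgumentException.class,
                () -> calculator.totalInCents(-1, 100)); // Force error conditions
    }
}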
Test at the right level
As others have mentioned, unit tests are not the only way to write automated tests. Other types of frameworks may be built off of unit tests, but provide mechanisms to do package level, system or integration tests. The best bang for the buck may be at a higher level, and just using unit testing to verify a single component's happy path.
Don't be discouraged
I'm painting a more grim picture here than I expect most developers will find in reality. The bottom line is that you make a commitment to learn how to write tests and write them well. But don't let fear of the unknown scare you into not writing any tests. Unlike production code, tests can be ditched and rewritten without many adverse effects.
Unit test any code that you think might change.
You should only really write unit tests for any code which you have written yourself. There is no need to test the functionality inherently provided to you.
For example, If you've been given a library with an add function, you should not be testing that add(1,2) returns 3. Now if you've WRITTEN that code, then yes, you should be testing it.
Of course, whoever wrote the library may not have tested it and it may not work... in which case you should write it yourself or get a separate one with the same functionality.
Well, you certainly shouldn't unit test everything, but at least the complicated tasks or those that will most likely contain errors/cases you haven't thought of.
The point of unit testing is being able to run a quick set of tests to verify that your code is correct. This lets you verify that your code matches your specification and also lets you make changes and ensure that they don't break anything.
Use your judgement. You don't want to spend all of your time writing unit tests or you won't have any time to write actual code to test.
When you've unit tested your unit tests, thinking you have then provided 200% coverage.
There is a development approach called test-driven development which essentially says that there is no such thing as too much (non-redundant) unit testing. That approach, however, is not a testing approach, but rather a design approach which relies on working code and a more or less complete unit test suite with tests which drive every single decision made about the codebase.
In a non-TDD situation, automated tests should exercise every line of code you write (in particular Branch coverage is good), but even then there are exceptions - you shouldn't be testing vendor-supplied platform or framework code unless you know for certain that there are bugs which will affect you in that platform. You shouldn't be testing thin wrappers (or, equally, if you need to test it, the wrapper is not thin). You should be testing all core business logic, and it is certainly helpful to have some set of tests that exercise your database at some elemental level, although those tests will never work in the common situation where unit tests are run every time you compile.
Specifically with regard to databases, testing is intrinsically slow and, depending on how much logic is held in your database, quite difficult to get right. Typically things like databases, HTML/XML documents and templating, and other document-ish aspects of a program are verified rather than tested. The difference is usually that testing tries to exercise execution paths whereas verification tries to verify inputs and outputs directly.
To learn more about this I would suggest reading up on "Code Coverage". There is a lot of material available if you're curious about this.

What would you include in a 10 min Grok talk on Unit Testing

I'm soon to do a 10 min Grok talk on unit testing at my company. I've been trying it myself, and feel that it can certainly bring benefits to the company. We already do WebInject testing in our dedicated QA team, but I want to try and sell unit testing to the devs.
So with only 10mins what would you cover and why?
We're a Microsoft shop building C# web apps; I've used NUnit in my experience.
Unit testing is all about confidence.
It allows you to be confident that your code is solid, and that other people can rely on it when they're writing their own parts of a system. If you can get across that unit testing will help to eliminate the trepidation that comes with the first release of a new system, I would hope that your audience will soon become very interested.
I'd start with a problem a lot of programmers might be familiar with: the fear of making a change to existing code because they might break something. How that prevents work from happening, or prevents it from being done properly (because they're afraid to refactor), and so leads to having to rewrite everything every x years.
Unit Testing -> Refactoring -> Living Code.
Edit:
BTW, I would not lead with the 'all code without unit tests is legacy code' quote from Michael Feathers. It certainly made me feel defensive the first time I heard it. By the time people stop feeling affronted, the 10 minutes will be over :-) (personally I think that quote is more true than it is helpful).
Here's a good format for a short talk on a technique X:
why you decided to try X in the first place
what you personally have gained from using X
what limitations you've noticed, things that X doesn't address
Don't "sell" or spend lots of time on the theory. Do prepare beforehand and point people to books, URLs of articles or tutorials that you think are most helpful. Those who are interested after your talk can look up the details on the Web.
Try to briefly talk about the aspect of Test Driven Development: write tests first and the interfaces as you go, then implement everything.
Maybe also about continuous integration, this means that as soon as you check something into your source control system, the project gets compiled and all the tests run so the developer knows immediately if he has done something wrong.
If there are any project managers in the audience, also be so fair to tell them that unit testing will make your project take 15-30 % more time, but it will be worth it in the long run.
You could mention that it will be a difficult learning curve, and it will feel like productivity is being impacted, but the benefits are worth it:
e.g. effectively the creation of an automated regression test suite, which in turn allows you to make bigger additions or modifications to existing code without worrying that you are breaking some existing functionality.
Creation of production code will be slower, but this should be offset by the higher quality of the code, i.e. fewer bugs, which in the long run means overall higher productivity.
I think 10 minutes is enough to present a simple example of how unit testing can save you time.
Implement a class (you can TDD if you feel like it) and show how a unit test can catch a modification that breaks the class.
Also, you can highlight how you can develop components faster if you test them in isolation (i.e. you don't need to bring up your web application, log in, go to your functionality, and test); you can just run your tests.
You might be able to perform this on a piece of code from your company, and maybe show how a unit test might have caught a bug you have had recently.
If you give a demonstration, do it on a working piece of code from a project that everyone is familiar with. Avoid contrived examples. Books on TDD are already full of them, and they don't really sell how TDD can work for a real project.
For the love of god, emphasize that unit tests are for testing "units" of logic. I hate looking at a QA suite of NUnit tests that nobody expected to have to maintain, where each "unit test" tests valid outputs for 150 (binary) input files and then shits itself if one fails, without telling you which one.
I would demonstrate:
The confidence it gives in code you produce
The confidence it gives when you change code because it passes the unit tests
The benefits of code coverage, no more "Oh that else statement was never tested!"
The benefits of running unit tests per each build on a CI platform like Hudson
FWIW we run the crappy visual studio testing via MSTEST on our Hudson box and I've got an xslt that Hudson uses to convert the results to the nunit format so Hudson can decipher them. Just putting that out there in case they want you to stick with a Microsoft testing platform.
Accountability, as highlighted by Kent Beck, is another trait that unit testing facilitates. Listen to his podcast at IT Conversations. (His point on accountability starts at 30:34.)
From a business perspective, you may want to highlight the fact that unit tests can "de-risk" any changes you make to your code. Once you have a suite of unit tests, you can make changes to the code base and know what breaks and what doesn't.
It might not be a bad idea to go over user testing. If you have a good set of tests, you can bring failing tests to the users after you make changes to have them validate that the new results are correct. Additionally, you can streamline requirements gathering if you have the users write new unit test definitions for you. They don't need to be able to code, but they do need to be able to give you the appropriate inputs and expected outputs (otherwise how would they know if the changes they asked for were working?).
Visual Studio has a pretty nice set of tools for unit testing, so an example or two may go a long way toward giving your group an idea of what unit testing is like in practice.
Wear this t-shirt ;-)
Well prepared live demo:
Find a "bug" in your "application"
Write a unit test that covers this bug.
Fix this bug
Show that your code is green.
So you can prove that there's no way this bug will occur again!
Another way to do this:
Propose an issue that can be solved by creating an algorithm. Something relatively simple, of course. Next, code this algorithm in a DLL project. Try to sneak in some weaknesses (i <= array.Length is always a good one). Next, ask them how they would test this DLL.
Most devs run their apps to test them, but you can't run a DLL. You might get some people suggesting you create a console app with methods that exercise the algorithm. Show them how you can craft unit tests to do this instead.
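A rough Java translation of that demo might look like the sketch below (the class names are invented; the same idea works in C# with NUnit): the test exercises the library code directly, with no console app or UI needed, and the seeded off-by-one makes it fail until the loop bound is fixed.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// The deliberately weak implementation: i <= values.length walks one past the end.
class Statistics {
    static int sum(int[] values) {
        int total = 0;
        for (int i = 0; i <= values.length; i++) { // bug: should be i < values.length
            total += values[i];
        }
        return total;
    }
}

class StatisticsTest {
    @Test
    void sumsAllValues() {
        // Fails with ArrayIndexOutOfBoundsException until the loop bound is fixed,
        // which is exactly the point of the demo.
        assertEquals(6, Statistics.sum(new int[] {1, 2, 3}));
    }
}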
Have a good set of resources for follow-up/self-directed learning:
the Pragmatic Unit Testing books for Java/C# are good books on the subject
the Kent Beck paper on unit testing
links to any larger samples using the testing framework of choice

What not to test when it comes to Unit Testing?

In which parts of a project is writing unit tests nearly or really impossible? Data access? FTP?
If there is an answer to this question then 100% coverage is a myth, isn't it?
Here I found (via Haacked) something Michael Feathers says that can be an answer:
He says,
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Again, in the same article he adds:
Generally, unit tests are supposed to be small, they test a method or the interaction of a couple of methods. When you pull the database, sockets, or file system access into your unit tests, they are not really about those methods any more; they are about the integration of your code with that other software.
That 100% coverage is a myth, which it is, does not mean that 80% coverage is useless. The goal, of course, is 100%, and between unit tests and then integration tests, you can approach it. What is impossible in unit testing is predicting all the totally strange things your customers will do to the product. Once you begin to discover these mind-boggling perversions of your code, make sure to roll tests for them back into the test suite.
Achieving 100% code coverage is almost always wasteful. There are many resources on this.
Nothing is impossible to unit test but there are always diminishing returns. It may not be worth it to unit test things that are painful to unit test.
The goal is not 100% code coverage nor is it 80% code coverage. A unit test being easy to write doesn't mean you should write it, and a unit test being hard to write doesn't mean you should avoid the effort.
The goal of any test is to detect user-visible problems in the most affordable manner.
Is the total cost of authoring, maintaining, and diagnosing problems flagged by the test (including false positives) worth the problems that specific test catches?
If the problem the test catches is 'expensive' then you can afford to put effort into figuring out how to test it, and maintaining that test. If the problem the test catches is trivial then writing (and maintaining!) the test (even in the presence of code changes) better be trivial.
The core goal of a unit test is to protect devs from implementation errors. That alone should indicate that too much effort will be a waste. After a certain point there are better strategies for getting correct implementation. Also after a certain point the user visible problems are due to correctly implementing the wrong thing which can only be caught by user level or integration testing.
What would you not test? Anything that could not possibly break.
When it comes to code coverage you want to aim for 100% of the code you actually write - that is, you need not test third-party library code or operating system code, since that code will have been delivered to you tested. Unless it's not. In which case you might want to test it. Or if there are known bugs, in which case you might want to test for the presence of the bugs, so that you get a notification of when they are fixed.
Unit testing of a GUI is also difficult, albeit not impossible, I guess.
Data access is possible because you can set up a test database.
Generally the 'untestable' stuff is FTP, email and so forth. However, they are generally framework classes which you can rely on and therefore do not need to test if you hide them behind an abstraction.
Also, 100% code coverage is not enough on its own.
#GarryShutler
I actually unit test email by using a fake SMTP server (Wiser). It makes sure your application code is correct:
http://maas-frensch.com/peter/2007/08/29/unittesting-e-mail-sending-using-spring/
Something like that could probably be done for other servers. Otherwise you should be able to mock the API...
BTW: 100% coverage is only the beginning... it just means that all code has actually been executed once... it says nothing about edge cases etc.
Most tests that need huge and expensive (in terms of resources or computation time) setups are integration tests. Unit tests should (in theory) only test small units of the code: individual functions.
For example, if you are testing email functionality, it makes sense to create a mock mailer. The purpose of that mock is to make sure your code calls the mailer correctly. To see if your application actually sends mail is an integration test.
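A minimal sketch of such a mock mailer with JUnit 5 and Mockito (the Mailer and WelcomeService types are invented for illustration):

import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical collaborator and unit under test.
interface Mailer {
    void send(String to, String subject, String body);
}

class WelcomeService {
    private final Mailer mailer;

    WelcomeService(Mailer mailer) {
        this.mailer = mailer;
    }

    void welcome(String emailAddress) {
        mailer.send(emailAddress, "Welcome!", "Thanks for signing up.");
    }
}

class WelcomeServiceTest {
    @Test
    void sendsAWelcomeMailToTheNewUser() {
        Mailer mailer = mock(Mailer.class);

        new WelcomeService(mailer).welcome("alice@example.com");

        // The unit test only checks that the mailer is called correctly;
        // whether mail actually goes out is an integration test's job.
        verify(mailer).send(eq("alice@example.com"), eq("Welcome!"), anyString());
    }
}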
It is very useful to make a distinction between unit-tests and integration tests. Unit-tests should run very fast. It should be easily possible to run all your unit-tests before you check in your code.
However, if your test-suite consists of many integration tests (that set up and tear down databases and the like), your test-run can easily exceed half an hour. In that case it is very likely that a developer will not run all the unit-tests before she checks in.
So to answer your question: do not unit-test things that are better implemented as an integration test (and also don't test getters/setters - it is a waste of time ;-) ).
In unit testing, you should not test anything that does not belong to your unit; testing units in their context is a different matter. That's the simple answer.
The basic rule I use is that you should unit test anything that touches the boundaries of your unit (usually class, or whatever else your unit might be), and mock the rest. There is no need to test the results that some database query returns, it suffices to test that your unit spits out the correct query.
This does not mean that you should omit stuff just because it is hard to test; even exception handling and concurrency issues can be tested pretty well using the right tools.
"What not to test when it comes to Unit Testing?"
* Beans with just getters and setters. Reasoning: Usually a waste of time that could be better spent testing something else.
Anything that is not completely deterministic is a no-no for unit testing. You want your unit tests to ALWAYS pass or fail with the same initial conditions - if weirdness like threading, random data generation, time/dates, or external services can affect this, then you shouldn't be covering it in your unit tests. Time/dates are a particularly nasty case. You can usually architect the code so that the date it works with is injected (by code and tests) rather than relying on the current date and time.
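One common way to do that injection is with java.time.Clock; here is a sketch under that assumption (the TrialPeriod class is a made-up example):

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: the clock is injected instead of calling Instant.now() directly.
class TrialPeriod {
    private final Clock clock;
    private final Instant expiresAt;

    TrialPeriod(Clock clock, Instant expiresAt) {
        this.clock = clock;
        this.expiresAt = expiresAt;
    }

    boolean isExpired() {
        return Instant.now(clock).isAfter(expiresAt);
    }
}

class TrialPeriodTest {
    @Test
    void reportsExpiredOnceTheDeadlineHasPassed() {
        // A fixed clock makes the test deterministic: it always "runs" at the same instant.
        Clock fixed = Clock.fixed(Instant.parse("2024-01-02T00:00:00Z"), ZoneOffset.UTC);
        TrialPeriod trial = new TrialPeriod(fixed, Instant.parse("2024-01-01T00:00:00Z"));

        assertTrue(trial.isExpired());
    }
}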
That said though, unit tests shouldn't be the only level of testing in your application. Achieving 100% unit test coverage is often a waste of time, and quickly meets diminishing returns.
Far better is to have a set of higher level functional tests, and even integration tests to ensure that the system works correctly "once it's all joined up" - which the unit tests by definition do not test.
Anything that needs a very large and complicated setup. Of course you can test FTP (a client), but then you need to set up an FTP server. For a unit test you need a reproducible test setup. If you cannot provide it, you cannot test it.
You can test them, but they won't be unit tests. A unit test is something that doesn't cross boundaries, such as going over the wire, hitting the database, running/interacting with a third party, touching an untested/legacy codebase, etc.
Anything beyond this is integration testing.
The obvious answer to the question in the title is: you shouldn't unit test the internals of your API, you shouldn't rely on someone else's behavior, and you shouldn't test anything that you are not responsible for.
The rest should be just enough to enable you to write your code against it, no more, no less.
Sure 100% coverage is a good goal when working on a large project, but for most projects fixing one or two bugs before deployment isn't necessarily worth the time to create exhaustive unit tests.
Exhaustively testing things like forms submission, database access, FTP access, etc at a very detailed level is often just a waste of time; unless the software being written needs a very high level of reliability (99.999% stuff) unit testing too much can be overkill and a real time sink.
I disagree with quamrana's response regarding not testing third-party code. This is an ideal use of a unit test. What if bug(s) are introduced in a new release of a library? Ideally, when a new version third-party library is released, you run the unit tests that represent the expected behaviour of this library to verify that it still works as expected.
Configuration is another item that is very difficult to test well in unit tests. Integration tests and other testing should be done against configuration. This reduces redundancy of testing and frees up a lot of time. Trying to unit test configuration is often frivolous.
FTP, SMTP, I/O in general should be tested using an interface. The interface should be implemented by an adapter (for the real code) and a mock for the unit test.
No unit test should exercise the real external resource (an FTP server, etc.).
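A sketch of that interface/adapter split, with a hand-rolled fake standing in for the mock (all names here are invented; the production adapter wrapping a real FTP client is omitted):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

// The boundary is an interface; production code gets an adapter around a real
// FTP client, while the unit test gets an in-memory fake.
interface FileTransfer {
    void upload(String remotePath, byte[] content);
}

class ReportPublisher {
    private final FileTransfer transfer;

    ReportPublisher(FileTransfer transfer) {
        this.transfer = transfer;
    }

    void publishDailyReport(byte[] report) {
        transfer.upload("/reports/daily.csv", report);
    }
}

class ReportPublisherTest {
    // Hand-rolled fake: records uploads instead of talking to a real server.
    static class RecordingFileTransfer implements FileTransfer {
        final List<String> uploadedPaths = new ArrayList<>();

        @Override
        public void upload(String remotePath, byte[] content) {
            uploadedPaths.add(remotePath);
        }
    }

    @Test
    void uploadsTheReportToTheExpectedPath() {
        RecordingFileTransfer transfer = new RecordingFileTransfer();

        new ReportPublisher(transfer).publishDailyReport(new byte[] {1, 2, 3});

        assertEquals(List.of("/reports/daily.csv"), transfer.uploadedPaths);
    }
}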
If the code to set up the state required for a unit test becomes significantly more complex than the code to be tested I tend to draw the line, and find another way to test the functionality. At that point you have to ask how do you know the unit test is right!
FTP, email, and so forth you can test with a server emulation. It is difficult but possible.
Some error handling is not testable. In every codebase there is error handling that can never be triggered. For example, in Java you must catch many exceptions because they are declared by an interface, but the instance actually used will never throw them. Or the default case of a switch when a case block exists for all possible values.
Of course, some of the unneeded error handling could be removed, but if a coding error is introduced in the future, that would be bad.
The main reason to unit test code in the first place is to validate the design of your code. It's possible to gain 100% code coverage, but not without using mock objects or some form of isolation or dependency injection.
Remember, unit tests aren't for users, they are for developers and build systems to use to validate a system prior to release. To that end, the unit tests should run very fast and have as little configuration and dependency friction as possible. Try to do as much as you can in memory, and avoid using network connections from the tests.