Do you do unit tests for non-production code?

I am interested in the following scenario specifically. Suppose you have a team that writes production code and a team that writes automatic tests. The team that writes the automatic tests has a dedicated framework for writing those tests. Should the testing team write unit tests for their framework, even though the framework is not used in production?

I've been in that situation, and what I did was use the production code's test suite as a test suite for the testing framework as well. Presumably, all the features of the framework were actually used, so if tests failed without a change in production code, then there must be a problem in the testing framework.
It worked OK-ish - running those tests took much longer than having a dedicated test-testsuite would have, and sometimes I wouldn't run all of them and have a problem turn up on the production build server. Diagnosing such problems took much longer than it would have with a test-testsuite.
All in all, I never felt comfortable with it and would really recommend having dedicated tests for the test framework as well. From the point of view of the test-writing team, the testing framework is production code. And if the testing framework ever gets used by anyone else, whose test suites you don't have access to...

Hell yes!
TDD gives you an edge in developing; it's not just to please the customer. It allows you to write testable, reusable and modular code. I would unit test everything that has to work, especially if you expect to change it often (refactoring to add new features).

The testing team should do whatever can increase their trust in the results delivered by their framework.
This includes testing, code reviews, quality standards, ...

Yes, if only to test that the framework generates sufficient test coverage.

This question reminds me of a story: once upon a time there was a company that believed in automatic tests. The belief was so strong that it led to the creation of a group whose sole purpose was writing these automatic tests. All the most ardent believers were allowed to join this group. There was much rejoicing!
Then one day it was discovered that this automatic testing group, despite its mission, didn't use automatic tests for its own work. Stones were cast, yada yada.
I'm just saying... I would think that any testing framework would have pretty solid test coverage.

Related

Is test driven development a form of unit testing?

Our company is in the process of improving code quality and the processes we adopt when delivering a piece of code. My question concerns unit testing, and I wanted to gather information on the process you adopt when you are asked to implement a piece of functionality.
Is TDD a form of unit testing? From what I understand of TDD, you write your test first (which fails), write your code, and then run your test, which should pass. It may be that the code makes external method calls. But how are we supposed to know about the stubbing required when we are writing our test first?
When you are building your application prior to release, what kind of tests do you include in the build? Does the build run your integration tests or only your unit tests?
Apart from TDD, do you write any other kinds of tests? Sorry if the questions are slightly scattered. Your experience of how you undertake development is highly appreciated. Thanks
TDD can be a whole lot more than Unit Testing - so I'd say that Unit Testing is just a part of TDD. The methodology as a whole, I think, can include creating tests (expressing an expectation/requirement of correct behaviour) for the result of any process in software development, be that writing code, build scripts, deployment scripts, database scripts, data import/export/transformation... whatever you need to do, you should ask yourself, "How can I prove this has worked? Can I automate a test for that?"
As an example: something that is often overlooked because it falls outside the scope of Unit Testing, but is a very valid test and one that is important to front-load in the development process, is deployment.
If a piece of software cannot be easily deployed to the production environment without significant effort and change (to the software or to the environment architecture), it is important to know this up front rather than a week before it has to go live. Once you have that process nailed, wouldn't it be nice to have a way of testing that it was correctly deployed?
When you understand that process - why not script and automate it? If you know the requirement is that it must be deployed, why not write a test for that before even doing it?
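For illustration, here's a minimal sketch of what such an automated deployment check could look like in JUnit. The health-check URL and class name are hypothetical; your application would need to expose something equivalent (a status page, a version resource, whatever fits).

```java
import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

// A deployment smoke test: run it against the target environment after deploying.
// The URL below is a hypothetical health-check endpoint; substitute whatever your
// application actually exposes.
public class DeploymentSmokeTest {

    private static final String HEALTH_URL = "https://example.com/myapp/health"; // assumption

    @Test
    public void deployedApplicationRespondsToHealthCheck() throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(HEALTH_URL).openConnection();
        connection.setRequestMethod("GET");
        connection.setConnectTimeout(5000);
        connection.setReadTimeout(5000);

        // 200 OK means the application was deployed and is able to serve requests.
        assertEquals(200, connection.getResponseCode());
    }
}
```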
I've said it before but I'll say it again - the best resource I've found on the subject is Growing Object-Oriented Software, Guided by Tests - which is part of the Kent Beck Signature Series.
TDD is not about testing. TDD uses tests to drive the design of your code. TDD produces tests as a happy side-effect of designing your code by writing the tests first, but it's not about testing: it isn't a testing methodology and the purpose is not to produce tests.
Is test driven development a form of unit testing?
No. It is a design methodology.
From what I understand in TDD, you write your test first (which fails), write your code and then run your test which should pass.
You're missing a very important step. You write your test first, you write your code until your test passes - and then you refactor. The tests permit you to refactor safely, ensuring that the desired behavior continues to work while you adjust your design. The tests also guide you to testable code, promoting smaller methods, shorter parameter lists, and overall much simpler design than other methodologies lead you to.
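As a rough illustration of one pass through that loop (with made-up names), in JUnit it might look like this:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Red: this test is written first and fails because Discount doesn't exist yet
// (or doesn't do the right thing).
public class DiscountTest {

    @Test
    public void tenPercentDiscountIsAppliedToTheOrderTotal() {
        Discount discount = new Discount(10);            // 10 percent
        assertEquals(90.0, discount.apply(100.0), 0.001);
    }
}

// Green: write only enough production code to make the test pass.
class Discount {
    private final int percent;

    Discount(int percent) {
        this.percent = percent;
    }

    double apply(double amount) {
        return amount - amount * percent / 100.0;
    }
}

// Refactor: with the test green, you can rename, extract methods or change the
// internals freely; the test keeps confirming the observable behaviour still holds.
```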
Apart from TDD, do you write any other kind of test?
When I do, it's usually a sign that I've failed to do TDD properly (but it certainly happens). We have both unit tests and user acceptance tests; both can be written prior to code, but sometimes our user acceptance tests are written later in the development cycle. They shouldn't be, but sometimes they are.
TDD is about design during the 5 minutes or so of your original Red-Green-Refactor loop. But it's arguably about testing forever after since there is nothing left to design - your TDD tests then become part of a perfect test harness to detect regressions caused by further developments. So yes, I guess you could say test driven development is a form of unit testing :)
But how are we supposed to know about the stubbing required when we are writing our test first?
TDD often requires a (quick) prior modelling session where you flesh out the big picture classes your SUT will collaborate with.
However, you need not go into the details of how these collaborators work. With mocks you basically apply wishful thinking: their implementations will behave correctly once you have TDD'd them at some later point, so for now you can concentrate on the SUT.
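For example, a sketch of that wishful-thinking style with a mocking library such as Mockito might look like the following. OrderService and PaymentGateway are hypothetical names, and PaymentGateway exists only as an interface at this point; its real implementation is TDD'd later.

```java
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

// The SUT is OrderService; PaymentGateway is a collaborator we have only sketched
// as an interface. The mock lets us "wish" its implementation into existence.
public class OrderServiceTest {

    interface PaymentGateway {
        boolean charge(String account, double amount);
    }

    static class OrderService {
        private final PaymentGateway gateway;

        OrderService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        boolean placeOrder(String account, double amount) {
            return gateway.charge(account, amount);
        }
    }

    @Test
    public void placingAnOrderChargesThePaymentGateway() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("ACC-1", 25.0)).thenReturn(true);

        OrderService service = new OrderService(gateway);

        assertTrue(service.placeOrder("ACC-1", 25.0));
        verify(gateway).charge("ACC-1", 25.0);   // the interaction we care about
    }
}
```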
When you are building your application prior to release, what kind of tests do you include in the build? Does the build run your integration tests or does it run only your unit tests?
When you practice Continuous Integration, your unit tests are supposed to run on every build, so you can theoretically take any (non-failing) build and use it as a release build.
However, you may want to run automated or manual integration/acceptance tests as well before releasing your version. GUIs, for instance, are usually not easily unit testable, so acceptance/integration testing is a good way to track down bugs in them.
You have several questions here; I'll try to address them in a logical order.
Is TDD a form of unit testing?
I'd say "yes", in the sense that it creates unit tests, even if that isn't the only benefit of using TDD. On the point stressed by commenters but not mentioned in your question: TDD not only ensures test coverage and documentation (good tests are one of the best forms of low-level code documentation); it also forces you to make certain design decisions, usually improving the overall design of the app.
Do you write other tests?
Well, I don't write any other unit tests. The point of TDD is developing the code in parallel with developing the tests. By writing software in a cycle - a single test, then only enough code to pass it - you make sure that your tests document all the functionality and behaviour you require from your code, and that the code is testable (you have to write it that way when doing TDD). There should be no need for additional unit tests.
There are other kinds of tests that you should use, though. Integration tests come to mind first, but there are others, like acceptance tests. If you have those automated, things will be easier for you. It's not you who should be writing the acceptance tests - it should be your customer/stakeholder, and you should be helping them with the technical part of writing them. You may be interested in FitNesse (http://fitnesse.org/) - it's a tool that helps non-technical people build acceptance tests.
About the stubbing?
It's kind of difficult to discuss this without concrete examples. All I can say right now is: just write the code one test at a time. If you do so, chances are you won't end up in a situation where you have a complicated class and have to think about how to stub around its complex dependencies.
What tests should be included in the build?
I'd say - all of them, if possible!

When should you stop unit testing?

I can often identify plenty of areas which are nicely encapsulated and easily unit tested, but I also find a lot of code where unit testing doesn't really seem to work as well - typically data access and the user interface. No matter what unit testing "techniques" I try, I tend to find that in these places it's not only a lot of effort to create working unit tests, but the tests also tend to be very fragile and don't really test very much.
At what point do you stop and decide that the benefits of unit testing aren't worth the cost?
When you can provide better value by doing something else.
I tend to test only the model and data persistence. Testing the model is mandatory. The UI (desktop application, web app, command-line interface, etc.) is hard to test, so I write tests for it only in rare circumstances.
Usually I only test the model and the controller. Unit tests are hard to apply to the UI; usually I prefer to test the app manually.
If creating these tests costs more time than manually testing every time you might have a regression, then the tests are useless (easy to say, hard to evaluate...).
If you need to cut out testing, cut out integration or end-to-end testing. Misko Hevery at Google explains it really well here.
"Unit Testing gives you more bang for
your buck"
is the best quote to come out of his article.
Other than that, when you have decent code coverage and are handling a few edge cases of your code, then it's a good time to stop unit testing.
If you have access to some live data you can use it for unit testing; you can also use data generators and random data. Unit tests only give you some level of confidence that the code won't create problems in the future. If you are confident in your testing, you can stop adding unit tests.
I like the title of the question. Apart from that I think it is a dupe of
Is there such a thing as excessive unit testing?
I would say when you've gained an acceptable level of confidence. Also, as with my project at work, we are under such tight time constraints that I really can only test certain parts of the code (not all of it), just to get to a good confidence level.
As far as data access testing goes, have you tried mock tests to simulate responses?
The basic rule of thumb I'd follow: stop when the effort to build the unit test is more than the effort to repeatedly test the feature manually.
If you look at the test projects in Visual Studio Team Edition for Testers, there is an item called a "Manual Test", which is essentially an instruction document telling a human how to carry out the test and manually pass it. Certain things, like the UI testing you mention, or code to work around obscurely odd or buggy hardware behaviour in the underlying framework, OS or driver, are better verified by human eyes.
If you are using TDD, then you stop unit testing when all the tests in the test list have succeeded.
Otherwise, you stop unit testing when the cost of finding more bugs through unit testing exceeds the cost of finding them through your QA process, and when you've reached an acceptable level of code coverage through the combination of all tests.
When there isn't time in the project plan for it, and time is being spent finding ways of testing rather than working towards the goal of the project.

What not to test when it comes to Unit Testing?

In which parts of a project is writing unit tests nearly or genuinely impossible? Data access? FTP?
If there is an answer to this question, then 100% coverage is a myth, isn't it?
Here I found (via Haacked) something Michael Feathers says that can serve as an answer:
He says,
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Again, in the same article, he adds:
Generally, unit tests are supposed to be small, they test a method or the interaction of a couple of methods. When you pull the database, sockets, or file system access into your unit tests, they are not really about those methods any more; they are about the integration of your code with that other software.
That 100% coverage is a myth, which it is, does not mean that 80% coverage is useless. The goal, of course, is 100%, and between unit tests and then integration tests, you can approach it. What is impossible in unit testing is predicting all the totally strange things your customers will do to the product. Once you begin to discover these mind-boggling perversions of your code, make sure to roll tests for them back into the test suite.
Achieving 100% code coverage is almost always wasteful. There are many resources on this.
Nothing is impossible to unit test but there are always diminishing returns. It may not be worth it to unit test things that are painful to unit test.
The goal is not 100% code coverage, nor is it 80% code coverage. A unit test being easy to write doesn't mean you should write it, and a unit test being hard to write doesn't mean you should avoid the effort.
The goal of any test is to detect user-visible problems in the most affordable manner.
Is the total cost of authoring, maintaining, and diagnosing problems flagged by the test (including false positives) worth the problems that specific test catches?
If the problem the test catches is 'expensive' then you can afford to put effort into figuring out how to test it, and maintaining that test. If the problem the test catches is trivial then writing (and maintaining!) the test (even in the presence of code changes) better be trivial.
The core goal of a unit test is to protect developers from implementation errors. That alone should indicate that too much effort will be a waste. After a certain point there are better strategies for getting a correct implementation. Also, after a certain point, the user-visible problems are due to correctly implementing the wrong thing, which can only be caught by user-level or integration testing.
What would you not test? Anything that could not possibly break.
When it comes to code coverage you want to aim for 100% of the code you actually write - that is, you need not test third-party library code or operating system code, since that code will have been delivered to you tested. Unless it's not. In which case you might want to test it. Or if there are known bugs, in which case you might want to test for the presence of those bugs, so that you get notified when they are fixed.
Unit testing of a GUI is also difficult, albeit not impossible, I guess.
Data access is possible because you can set up a test database.
Generally the 'untestable' stuff is FTP, email and so forth. However, these are usually framework classes which you can rely on and therefore do not need to test, provided you hide them behind an abstraction.
Also, 100% code coverage is not enough on its own.
#GarryShutler
I actually unit test email by using a fake SMTP server (Wiser). It makes sure your application code is correct:
http://maas-frensch.com/peter/2007/08/29/unittesting-e-mail-sending-using-spring/
Something like that could probably be done for other servers. Otherwise you should be able to mock the API...
BTW: 100% coverage is only the beginning... it just means that all the code has actually been executed once... it says nothing about edge cases etc.
Most tests that need huge and expensive (in terms of resources or computation time) setups are integration tests. Unit tests should (in theory) only test small units of the code - individual functions.
For example, if you are testing email functionality, it makes sense to create a mock mailer. The purpose of that mock is to make sure your code calls the mailer correctly. Checking whether your application actually sends mail is an integration test.
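A minimal sketch of that mock-mailer idea (hypothetical names, with a hand-rolled fake rather than a mocking library) could look like this:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// Mailer is the abstraction the application depends on; RecordingMailer is a
// hand-rolled fake used only in the test. Both names are purely illustrative.
public class RegistrationTest {

    interface Mailer {
        void send(String to, String subject, String body);
    }

    static class RecordingMailer implements Mailer {
        final List<String> recipients = new ArrayList<String>();

        public void send(String to, String subject, String body) {
            recipients.add(to);   // just record the call; nothing leaves the process
        }
    }

    static class RegistrationService {
        private final Mailer mailer;

        RegistrationService(Mailer mailer) {
            this.mailer = mailer;
        }

        void register(String email) {
            // ... persist the user, etc. ...
            mailer.send(email, "Welcome", "Thanks for registering.");
        }
    }

    @Test
    public void registeringAUserSendsAWelcomeMail() {
        RecordingMailer mailer = new RecordingMailer();
        new RegistrationService(mailer).register("alice@example.com");

        // The unit test only asserts that the mailer was called correctly;
        // whether mail really goes out is the job of an integration test.
        assertEquals("alice@example.com", mailer.recipients.get(0));
    }
}
```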
It is very useful to make a distinction between unit-tests and integration tests. Unit-tests should run very fast. It should be easily possible to run all your unit-tests before you check in your code.
However, if your test suite consists of many integration tests (that set up and tear down databases and the like), your test run can easily exceed half an hour. In that case it is very likely that a developer will not run all the tests before she checks in.
So to answer your question: do not unit-test things that are better implemented as an integration test (and also don't test getters/setters - it's a waste of time ;-) ).
In unit testing, you should not test anything that does not belong to your unit; testing units in their context is a different matter. That's the simple answer.
The basic rule I use is that you should unit test anything that touches the boundaries of your unit (usually a class, or whatever else your unit might be), and mock the rest. There is no need to test the results that some database query returns; it suffices to test that your unit emits the correct query.
This does not mean that you should omit stuff just because it is hard to test; even exception handling and concurrency issues can be tested pretty well using the right tools.
"What not to test when it comes to Unit Testing?"
* Beans with just getters and setters. Reasoning: Usually a waste of time that could be better spent testing something else.
Anything that is not completely deterministic is a no-no for unit testing. You want your unit tests to ALWAYS pass or fail given the same initial conditions - if weirdness like threading, random data generation, time/dates, or external services can affect this, then you shouldn't be covering it in your unit tests. Time/dates are a particularly nasty case. You can usually architect the code so that the date to work with is injected (by code and tests) rather than relying on the current date and time.
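As an illustration, one way to do that injection in modern Java is to pass in a java.time.Clock and freeze it in the test; the Voucher class below is purely hypothetical:

```java
import static org.junit.Assert.assertTrue;

import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

import org.junit.Test;

// Voucher asks an injected Clock for "now" instead of calling LocalDate.now()
// directly, so the test is deterministic no matter when it runs.
public class VoucherTest {

    static class Voucher {
        private final LocalDate expires;
        private final Clock clock;

        Voucher(LocalDate expires, Clock clock) {
            this.expires = expires;
            this.clock = clock;
        }

        boolean isExpired() {
            return LocalDate.now(clock).isAfter(expires);
        }
    }

    @Test
    public void voucherIsExpiredTheDayAfterItsExpiryDate() {
        // Freeze "today" at 2024-07-02 so the test gives the same answer forever.
        Clock fixed = Clock.fixed(Instant.parse("2024-07-02T00:00:00Z"), ZoneOffset.UTC);
        Voucher voucher = new Voucher(LocalDate.of(2024, 7, 1), fixed);

        assertTrue(voucher.isExpired());
    }
}
```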
That said though, unit tests shouldn't be the only level of testing in your application. Achieving 100% unit test coverage is often a waste of time, and quickly meets diminishing returns.
Far better is to have a set of higher level functional tests, and even integration tests to ensure that the system works correctly "once it's all joined up" - which the unit tests by definition do not test.
Anything that needs a very large and complicated setup. Of course you can test FTP (the client), but then you need to set up an FTP server. For a unit test you need a reproducible test setup. If you cannot provide it, you cannot test it.
You can test them, but they won't be unit tests. A unit test is something that doesn't cross boundaries, such as going over the wire, hitting the database, running/interacting with a third party, or touching an untested/legacy codebase.
Anything beyond this is integration testing.
The obvious answer to the question in the title is: you shouldn't unit test the internals of your API, you shouldn't rely on someone else's behaviour, and you shouldn't test anything that you are not responsible for.
The rest should be just enough to let you write your code against it - no more, no less.
Sure, 100% coverage is a good goal when working on a large project, but for most projects fixing one or two bugs before deployment isn't necessarily worth the time it takes to create exhaustive unit tests.
Exhaustively testing things like form submission, database access, FTP access, etc. at a very detailed level is often just a waste of time; unless the software being written needs a very high level of reliability (99.999% stuff), too much unit testing can be overkill and a real time sink.
I disagree with quamrana's response regarding not testing third-party code. This is an ideal use of a unit test. What if bugs are introduced in a new release of a library? Ideally, when a new version of a third-party library is released, you run the unit tests that represent its expected behaviour to verify that it still works as expected.
Configuration is another item that is very difficult to test well in unit tests. Integration tests and other testing should be done against configuration. This reduces redundancy of testing and frees up a lot of time. Trying to unit test configuration is often frivolous.
FTP, SMTP, and I/O in general should be tested using an interface. The interface should be implemented by an adapter (for the real code) and by a mock for the unit test.
No unit test should exercise the real external resource (an FTP server, etc.).
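As a sketch of that interface/adapter split (all names hypothetical), the unit test talks to an in-memory fake; a production adapter would wrap a real client library and is deliberately not shown here:

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

// FileTransfer is the abstraction the application depends on; in production an
// adapter class would implement it on top of a real FTP client, while the unit
// test uses an in-memory fake so no server is ever touched.
public class ReportUploaderTest {

    interface FileTransfer {
        void upload(String remotePath, byte[] content);
    }

    static class InMemoryFileTransfer implements FileTransfer {
        final Map<String, byte[]> files = new HashMap<String, byte[]>();

        public void upload(String remotePath, byte[] content) {
            files.put(remotePath, content);   // no network, no FTP server
        }
    }

    static class ReportUploader {
        private final FileTransfer transfer;

        ReportUploader(FileTransfer transfer) {
            this.transfer = transfer;
        }

        void publish(String name, String report) {
            transfer.upload("/reports/" + name + ".txt", report.getBytes());
        }
    }

    @Test
    public void reportIsUploadedToTheExpectedRemotePath() {
        InMemoryFileTransfer fake = new InMemoryFileTransfer();
        new ReportUploader(fake).publish("daily", "all good");

        assertEquals("all good", new String(fake.files.get("/reports/daily.txt")));
    }
}
```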
If the code to set up the state required for a unit test becomes significantly more complex than the code to be tested, I tend to draw the line and find another way to test the functionality. At that point you have to ask: how do you know the unit test is right?
You can test FTP, email and so forth against an emulated server. It is difficult but possible.
Some error handling is not testable. Every codebase contains error handling for cases that can never occur. For example, in Java you must catch many exceptions because they are declared by an interface, even though the instance you actually use will never throw them. The same goes for the default case of a switch when a case block exists for every possible value.
Of course, some of the unneeded error handling can be removed, but if a coding error is introduced in the future, the missing handling would be bad.
The main reason to unit test code in the first place is to validate the design of your code. It's possible to gain 100% code coverage, but not without using mock objects or some form of isolation or dependency injection.
Remember, unit tests aren't for users, they are for developers and build systems to use to validate a system prior to release. To that end, the unit tests should run very fast and have as little configuration and dependency friction as possible. Try to do as much as you can in memory, and avoid using network connections from the tests.

Using unit tests as a "functionality contract"

Unit tests are often deployed with software releases to validate the install - i.e. do the install, run the tests and if they pass then the install is good.
I'm about to embark on a project that will involve delivering prototype software library releases to customers. The unit tests will be delivered as part of each release, and in addition to using the tests to validate the install, I plan on using the unit tests that exercise the API as a "contract" for how the release should be used. If the user uses the release in a similar manner to how it is used by the unit tests, then great. If they use it some other way, then all bets are off.
Has anybody tried this before? Any thoughts on whether this is a good/bad idea?
Edit: To highlight a good point raised by ChrisA and Dan in the replies below: the "unit tests that test the API" are better called integration tests; their intent is to exercise the API and the software to demonstrate the functionality of the software from a customer's perspective.
Sounds like a good idea to me. I (we all?) routinely use unit tests internally to do just that. In using my unit tests to validate that I haven't broken anything I'm also implicitly verifying that my API contract hasn't changed. It seems like a natural usage of unit tests to deploy them in the fashion you're talking about.
Agile methodologies say: Tests are specifications, so this is a very good idea.
I fully expect to be flamed for this, but I don't understand how a set of unit tests proves anything at all about the kind of things a customer cares about, namely whether the application meets his business requirements.
Here's an example: I've just finished converting a chunk of code to fix a big mistake we made. It was a classic case of over-engineering, and the changes have touched about a dozen windows forms and about as many classes.
It's taken me a couple of days, it's now a lot simpler, we gained some features for free, and we lost a ton of code that did stuff that we now know we never really needed.
Every single one of those forms worked perfectly before. The public methods did exactly what they needed to do, and the underlying data accesses were just fine.
So any unit test would have passed.
Except, sadly, they did the wrong thing - which we didn't realise, except in retrospect. It's as if we'd built a prototype and only after trying to use it, realised that it wasn't right.
So now we have a leaner, meaner, fitter application.
But the things that were wrong, were wrong at a level where unit tests could never have revealed them, so I'm just not understanding how shipping a set of unit tests with an install does anything except give a false sense of security.
Maybe I'm not understanding something, but it seems to me that unless the thing that is shipped functions at the same level as the tests supplied, they prove nothing.
It's actually a pretty good idea, and extremely pleasant as an API user.
This technique can actually also be used the other way round: when you're using a "legacy" API, you can use unit tests to document the way you think the API behaves and to validate that it actually behaves as planned.
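These are sometimes called "learning tests". As a small, self-contained illustration, the tests below pin down two easily forgotten behaviours of the JDK's own String.split, so a behaviour change in a future version would show up as a failure:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Learning tests: they don't test our own code at all, they document how a
// third-party API (here: the JDK) behaves, so our assumptions stay checked.
public class StringSplitLearningTest {

    @Test
    public void splitKeepsEmptyStringsInTheMiddle() {
        assertEquals(3, "a,,b".split(",").length);   // {"a", "", "b"}
    }

    @Test
    public void splitDropsTrailingEmptyStrings() {
        assertEquals(2, "a,b,,".split(",").length);  // {"a", "b"} - surprising, so worth pinning down
    }
}
```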
If you are releasing a code library, this sounds great.
If you are releasing an ordinary software product with which your users will interact only via a GUI, your unit tests may not be working at the same level of abstraction and may not be the most useful tool to assess the behaviour of your product. A really good user manual (yes, this is possible) might be better for that.
If you're interested in providing a set of specifications with your code, perhaps you should investigate some of the behaviour-driven development tools (NBehave, JBehave, RSpec, etc.). These frameworks provide support for describing your tests in given/when/then syntax and outputting formatted results in natural language. See NBehave for an example of a BDD tool for .NET. You can find an excellent description of BDD here.
Another option may be to write tests using an acceptance testing framework such as Fit or FitNesse (or the Java-only Concordion) and deliver these acceptance tests with the code. Both Fit/FitNesse and Concordion allow specification of the tests in plain HTML or even Word documents.
The benefit of either approach (BDD or acceptance testing frameworks) is that the results the user sees are more human-readable and understandable.
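Even without one of those frameworks you can keep the given/when/then structure inside a plain unit test; a small illustrative sketch (the Account class is made up):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A plain JUnit test laid out in given/when/then style, so the expectation
// reads like a specification even without a BDD framework.
public class AccountWithdrawalTest {

    static class Account {
        private double balance;

        Account(double openingBalance) {
            this.balance = openingBalance;
        }

        void withdraw(double amount) {
            balance -= amount;
        }

        double balance() {
            return balance;
        }
    }

    @Test
    public void withdrawingReducesTheBalance() {
        // given an account with 100 in it
        Account account = new Account(100.0);

        // when the customer withdraws 30
        account.withdraw(30.0);

        // then the balance is 70
        assertEquals(70.0, account.balance(), 0.001);
    }
}
```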
Tests will check requirements.
Requirements define functionality
=> Tests will check functionality.
The problem is that only functionality which can be covered by unit tests can be checked this way. Integration or whole-system tests won't work.
Other than that, checking functionality via unit tests is the main approach of TDD.
Meszaros calls this "Tests as documentation"

Getting Started with Unit Testing

Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automate-able and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Rytmis
My question is, what are the current "best practices" in terms of tools as well as when and where to use unit testing as part of your daily coding?
Let's try to be somewhat language-agnostic and cover all the bases.
OK, here are some best practices from someone who doesn't unit test as much as he should... cough.
Make sure your tests test one thing and one thing only.
Write unit tests as you go. Preferably before you write the code you are testing against.
Do not unit test the GUI.
Separate your concerns.
Minimise the dependencies of your tests.
Mock behaviour with mocks.
You might want to look at TDD on Three Index Cards and Three Index Cards to Easily Remember the Essence of Test-Driven Development:
Card #1. Uncle Bob’s Three Laws
Write no production code except to pass a failing test.
Write only enough of a test to demonstrate a failure.
Write only enough production code to pass the test.
Card #2: FIRST Principles
Fast: Mind-numbingly fast, as in hundreds or thousands per second.
Isolated: The test isolates a fault clearly.
Repeatable: I can run it repeatedly and it will pass or fail the same way each time.
Self-verifying: The Test is unambiguously pass-fail.
Timely: Produced in lockstep with tiny code changes.
Card #3: Core of TDD
Red: test fails
Green: test passes
Refactor: clean code and tests
The so-called xUnit framework is widely used. It was originally developed for Smalltalk as SUnit, evolved into JUnit for Java, and now has many other implementations such as NUnit for .NET. It's almost a de facto standard - if you say you're using unit tests, a majority of other developers will assume you mean xUnit or similar.
A great resource for 'best practices' is the Google Testing Blog; for example, a recent post on Writing Testable Code is a fantastic resource. Specifically, their weekly 'Testing on the Toilet' posts are great for posting around your cube, or toilet, so you are always thinking about testing.
The xUnit family are the mainstay of unit testing. They are integrated into the likes of NetBeans, Eclipse and many other IDEs. They offer a simple, structured solution to unit testing.
One thing I always try to do when writing a test is to minimise external code usage. By that I mean: I try to minimise the setup and teardown code for the test as much as possible, and I try to avoid using other modules/code blocks as much as possible. Well-written modular code shouldn't require too much external code in its setup and teardown.
NUnit is a good tool for any of the .NET languages.
Unit tests can be used in a number of ways:
Test logic.
Increase the separation of code units. If you can't fully test a function or section of code, then the parts that make it up are too interdependent.
Drive development - some people write tests before they write the code to be tested. This forces you to think about what you want the code to do, and then gives you a definite guideline for when you have achieved that.
Don't forget refactoring support. ReSharper on .NET provides automatic refactoring and quick fixes for missing code. That means if you write a call to something that does not exist, ReSharper will ask if you want to create the missing piece.