Is Unit Testing Suitable for BPM Development?

I'm currently working on a large BPM project at work which uses the Global 360 BPM tool-set called Process 360. To give some background: this product works like a lot of other BPM solutions in that you design multiple "process maps" which define the flow of a particular business process you're trying to model, and each process map consists of multiple task nodes connected together which perform particular functions (calling web services etc.).
Currently we're experiencing some pretty serious issues during the QA phases of our releases because the tool-set provides no way to automate testing of the process map routes. So when a large and complex process is developed and handed over to our test team, a large number of issues often crop up. Obviously you'd expect some issues to come out of QA, but I can't help feeling that a lot of the bugs could have been spotted during development if we had some sort of automated testing framework we could use to build up a set of unit tests proving the various routes through the process map(s).
At the moment the only real development testing that occurs is closer to functional testing performed by the developers, documented as a set of manual steps per test case. The problem with this approach is that it's very time consuming for the developers to run manually and, because of this, relatively prone to error. Also, because we're usually on a pretty tight schedule, the tests are often not executed often enough to spot issues early.
As I mentioned earlier, the current tool-set provides no way to perform this sort of automated testing, which got me thinking: why? Being very new to the whole BPM scene, my assumption was that this was just a feature lacking in the product, but I also wonder whether "unit testing" traditionally just isn't done in the BPM world. Perhaps it simply isn't well suited to this sort of work?
I'd be interested to know if anyone else has ever encountered these sorts of issues, and also what - if anything - can be done to improve things.

I have seen something about that, though not Global 360 related: using BPELUnit for testing processes.
I develop a workflow tool, and there is increasing demand to open up the test tools we use for testing the engine to end-users.

I've done "unit" testing with K2.net 2003, another commercial BPM. I'd really call this integration testing, because it requires a test server and it's relatively slow. However, it is automated.
There's a good discussion of this in the book Professional K2 blackpearl (it applies to K2.net 2003 as well).
In order to apply it to your platform, the tool set has to have an API that permits starting process instances, obtaining work items, completing work items, etc. You write tests using any supported language (I used C#) and a testing framework (I used NUnit). If the API supports synchronous calls, this is easier to do. For each test:
Start the process under test
Progress the work item to a decision point
Set process instance data appropriately
Complete the work item
Assert that the work item is now at the expected activity
Delete or complete the process instance
Base test classes or helper methods can make this easier. You could even write a DSL for testing maps.
Essentially you want full "test coverage" of the process/map: test every decision point and ensure that the correct branch is taken. A sketch of one such test follows below.
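A minimal sketch of what one such test could look like, written here in Java/JUnit (the answer above used C#/NUnit); the BpmClient facade is hypothetical and stands in for whatever API your tool-set actually exposes:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LoanProcessRouteTest {

    // Hypothetical facade over your BPM tool-set's API; the tool-set must
    // expose starting instances, fetching/completing work items, and so on.
    interface BpmClient {
        long startProcess(String mapName);
        long getWorkItem(long instanceId, String activity);
        void setInstanceData(long instanceId, String field, Object value);
        void completeWorkItem(long workItemId);
        String currentActivity(long instanceId);
        void deleteInstance(long instanceId);
    }

    private final BpmClient client = connect(); // wire this up to a test server

    private static BpmClient connect() {
        throw new UnsupportedOperationException("implement against your tool-set's API");
    }

    @Test
    public void highValueLoanRoutesToManualReview() {
        long id = client.startProcess("LoanApproval");            // start the process under test
        long item = client.getWorkItem(id, "AssessRisk");         // progress to the decision point
        client.setInstanceData(id, "loanAmount", 500_000);        // data that drives the branch
        client.completeWorkItem(item);                            // complete the work item
        assertEquals("ManualReview", client.currentActivity(id)); // assert the expected activity
        client.deleteInstance(id);                                // clean up the instance
    }
}
```

Because this runs against a test server, it is integration-style and slow, but it can sit in the same CI build as ordinary unit tests.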

There are two aspects of BPM that are related yet not identical.
There's the BPM that tool and technology vendors advocate, which is all about features.
There's also the BPM that Enterprise Architects advocate, which is all about establishing Centers of Excellence.
The former is where a company buys some software.
The latter is where a company makes systemic and inherent changes to the behavior of its IT workers.
The former is supposed to be in the service of the latter, but that isn't necessarily so. Acquiring the former is necessary but not sufficient for achieving the latter.
I don't know how well Global 360 supports what is known as Test-Driven Development, but JBoss jBPM does provide some tool support for easily writing JUnit tests.
However, the tool can't and won't force developers to write them or to embrace TDD principles.


Can unit tests be environment-dependent?

I have this very generic question about unit tests: should we write environment-dependent unit tests? The tests may depend on the file system, a specific connection, or the presence of a file.
I am not sure there is an answer to this, but I've found it takes time to decouple unit tests from the environment. Does that mean it's OK to create environment-dependent unit tests? Or does the need to write environment-dependent unit tests always point to a design problem?
TL;DR
Whether environment-dependent tests are a design flaw or not depends strongly on your application, programming language and specific problem. Keep in mind that tests are never "complete" or "perfect", and try not to chase perfection at the expense of ever more time. Using modern frameworks, making your tests re-usable and mocking your environment can take a lot of the pain out of testing.
The longer answer
IMHO, there's no perfect answer to this question. Unit tests should be as generic and re-usable as possible, without trying to cover environmental edge cases at any price.
Example: most environments run on an AMD64 processor nowadays (phones aside), so in most cases there is no need to create unit tests for 32-bit x86 or ARM environments.
Your question is technology agnostic, so it's hard to tell whether the need for environment-dependent unit tests is a design flaw. If you're using Java (see below), I'd lean towards yes; with C++, it might not be.
Some rules of thumb I can give from my experience (and therefore a little opinion-based):
Follow the Pareto principle
Don't try to achieve 100% of what is possible in theory; concentrate on the roughly 80% that is actually likely to be encountered. Say you expect your application to run on Windows clients: then your tests should probably cover the environmental characteristics of Windows 10 and 11, while older Windows versions, Linux, macOS, etc. should not be needed.
Or, if your application mainly has a Spanish audience, you should not care so much about how it deals with Chinese characters or Canadian data protection rules.
Exclude environmental specialties as much as is sane and possible when coding your application
If you can make your application code mostly independent of its environment, the same will "automatically" apply to your tests. (Also, see the point about not re-inventing the wheel below.)
As you didn't ask for a specific programming language, I'll refer to Java here, but similar concepts exist in most modern languages.
E.g., Java abstracts away a lot of the environment, so you can use File.separator if you don't know whether your application will be running on Windows (\ as separator) or Linux (/).
Not using OS-specific APIs (like JNI) avoids environment-related problems as well. Unless you're dealing closely with hardware access, there shouldn't be any need for them.
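For instance, a trivial sketch using only standard JDK APIs to stay path-separator agnostic:

```java
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathExample {
    public static void main(String[] args) {
        // Fragile: hard-codes the Windows separator
        String bad = "data\\config\\app.properties";

        // Portable: File.separator resolves to "\" on Windows and "/" on Linux
        String better = "data" + File.separator + "config" + File.separator + "app.properties";

        // Best: let java.nio.file assemble the path for the current OS
        Path best = Paths.get("data", "config", "app.properties");

        System.out.println(bad + " | " + better + " | " + best);
    }
}
```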
Adapt your tests
Like the "real" application, your tests are usually never "done". A new problem with your software arises? Fix it and add a specific test case for this kind of bug (trying to stay fairly generic). If it was a problem from the 20% of cases or environments you did not consider before, be happy: now you've got a little hold on them as well.
Package your test data with your application
Referring to Java again, you could place your test files in the resources folder and load them dynamically while testing: no need to touch the real file system of the testing machine.
Also, if you're using a database, you could spin up an in-memory instance (like H2) during testing. This will not exactly mirror your production database system, but it makes testing very easy without the danger of breaking something. Database abstraction is very sophisticated nowadays anyway, so apart from some edge cases you should not see any difference in behaviour if you're using a good database abstraction layer (which in turn makes you less environment-dependent).
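A small sketch of both ideas; the resource name is made up, but getResourceAsStream and the H2 in-memory JDBC URL are standard (H2 just needs to be on the test classpath):

```java
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;

public class EnvironmentFreeTestSupport {

    // Loads test data packaged under src/test/resources, so the test
    // never touches the real file system of the machine it runs on.
    InputStream loadTestData() {
        return getClass().getResourceAsStream("/testdata/sample-input.csv"); // hypothetical file
    }

    // Spins up a throwaway in-memory H2 database; it vanishes when the
    // JVM exits, so nothing real can be broken.
    Connection openInMemoryDb() throws Exception {
        return DriverManager.getConnection("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1", "sa", "");
    }
}
```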
Make your tests re-usable
Using a tool like Maven or Gradle, and packaging your tests as just pointed out, you can make them run on a much bigger variety of environments. The need for a specific test environment will decrease.
Mocking is your friend
As with the "fake" database mentioned before, you can mock a lot of things. Frameworks like Mockito allow white-box and black-box testing with mocked objects, and "helpers" like WireMock even let you mock the answers you get from an external service. And there are many more possibilities out there!
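A minimal Mockito sketch; the RateService collaborator is hypothetical and stands in for anything that would normally hit the network:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

public class MockingExampleTest {

    // Hypothetical collaborator that would normally call an external service
    interface RateService {
        double rateFor(String currency);
    }

    @Test
    public void usesMockedRateInsteadOfRealService() {
        RateService rates = mock(RateService.class);
        when(rates.rateFor("EUR")).thenReturn(1.1); // canned answer, no network

        assertEquals(1.1, rates.rateFor("EUR"), 0.0001);
        verify(rates).rateFor("EUR"); // the interaction we expected happened
    }
}
```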
Don't re-invent the wheel
If you use one of today's advanced application frameworks (like Spring, Symfony, Boost, ...), they'll do a lot of the work for you if you make proper use of them. You'll have to read the docs and dig into them, but then you'll realize how much easier they can make your life. This applies not only to frameworks but also to libraries, services, components, and so on. And the good thing is, all of that also pays off when it comes to testing! Many of your environmental dependencies may simply vanish if you build on a good software foundation.

Getting a custom built financial software system under test automation

I need to get a financials-heavy software system under test automation. It has been in development and production for a couple of years. I've been an advocate of developers building test automation into their coding practices for over 18 years, but this is a unique challenge. Beyond establishing test automation standards and training for the technologies we are using, I need to figure out the most effective strategy for prioritizing and implementing automated tests in the existing system: tests that will improve confidence/stability/reliability while rapid development of future features continues. There are many teams working on this solution.
My primary decision point is whether I should start by focusing on the internals of the financial transactions, processes and rules (with unit and integration tests under the hood) or start with use cases that tell full stories. Two things are competing for priority: making sure requirements have been implemented correctly vs. making sure what has already been implemented does NOT get broken by new code merges. I realize that these are not mutually exclusive, but time is of the essence and I need to move yesterday on implementing the highest value tests.
I would love to discuss with others experienced with test automation in systems that are heavy on financial transactions that are CRITICAL.
Since you will never catch up with the tests, you should focus on new features and fixes. Whenever the code is about to be changed, try to identify which parts of the existing code will be touched and see if you can improve the test coverage for those parts before writing any new code. Do not aim for perfection; just try to get some tests running that would catch any serious regressions. Then try to TDD the new feature/fix.
Also check out "Working Effectively with Legacy Code" by Michael Feathers.
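In Feathers' terms, those first tests are "characterization tests": they pin down what the code does today, right or wrong, before you change it. A sketch, with legacyFee standing in for some hypothetical legacy code:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FeeCalculatorCharacterizationTest {

    // Hypothetical legacy code we are about to touch
    static double legacyFee(double amount) {
        return amount * 0.025 + 1.50; // nobody remembers why
    }

    @Test
    public void pinsDownCurrentBehaviourBeforeRefactoring() {
        // Expected values captured by running the existing code, not taken
        // from a spec; if a change alters them, we want to know immediately.
        assertEquals(26.50, legacyFee(1000.00), 0.0001);
        assertEquals(1.50, legacyFee(0.00), 0.0001);
    }
}
```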

Test Driven Development (TDD) for User Interface (UI) with functional tests

As we know, TDD means "write the test first, then write the code". When it comes to unit testing this is fine, because you are limited to the "unit".
However, when it comes to UI, writing functional tests beforehand makes less sense (to me), because the functional tests have to verify a (possibly long) set of functional requirements. This may span multiple screens (pages) and preconditions like "logged in", "having recently inserted a record", etc.
According to Wikipedia:
Test-driven development is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations.
(Of course, Wikipedia is no "authority", but this sounds very logical.)
So, any thoughts, or better, experience, with writing functional tests first for the UI and then the code? Does it work? And is it a "pain"?
Try BDD, Behavior-Driven Development. It promotes writing specification stories which are then executed step by step, stimulating the app to change its state and verifying the outcomes.
I use BDD scenarios to write UI code. Business requests are described as BDD stories, and then the functionality is written to turn the stories, i.e. the tests, green.
The key to testing a UI is to separate your concerns - the behavior of your UI is actually different than the appearance of your UI. We struggle with this mentally, so as an exercise take a game like Tetris and imagine porting it from one platform (say PC) to another (the web). The intuition is that everything is different - you have to rewrite everything! But in reality all this is the same:
The rules of the game.
The speed of the blocks falling.
The logic for matching rows.
Which block is chosen.
And more...
You get the idea. The only thing that changes is how the screen is drawn. So separate how your UI looks from how it works. This is tricky, and usually can't be perfect, but it comes close. My recommendation is to write the UI last. Tests will provide feedback on whether your behavior is working, and the screen will tell you whether it looks right. This combination provides the fast feedback we're looking for from TDD, without a UI.
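A sketch of that separation in Java: the rules live in a plain class with no drawing code, so they can be unit tested head-less on any platform, while the hypothetical Renderer interface marks the only platform-specific seam:

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class TetrisRulesTest {

    // Pure game logic: knows nothing about pixels, canvases or platforms.
    static class Board {
        private final boolean[][] cells;
        Board(int rows, int cols) { cells = new boolean[rows][cols]; }
        void fillRow(int row) { java.util.Arrays.fill(cells[row], true); }
        boolean isRowComplete(int row) {
            for (boolean cell : cells[row]) if (!cell) return false;
            return true;
        }
    }

    // The only platform-specific piece; PC and web get different implementations.
    interface Renderer {
        void draw(Board board);
    }

    @Test
    public void completedRowIsDetectedWithoutAnyUi() {
        Board board = new Board(20, 10);
        board.fillRow(19);
        assertTrue(board.isRowComplete(19));  // rules tested with no screen at all
        assertFalse(board.isRowComplete(0));
    }
}
```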
TDD for functional tests makes sense to me. Write a functional test, see it failing, then divide the problem into parts, write a unit test for each part, write code for each part, see unit tests pass and then your functional test should pass.
The book Test-Driven Development with Python by Harry Percival (available online for free) suggests essentially this double-loop workflow: an outer loop of failing functional tests driving an inner loop of unit tests and code.
P.S. You can automate your functional tests using e.g. Selenium, and add them to the Continuous Integration cycle alongside the unit tests.
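A minimal Selenium (Java) functional test along those lines; the URL and element IDs are made up, and ChromeDriver assumes a local chromedriver binary is available:

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.Assert.assertEquals;

public class LoginFunctionalTest {

    @Test
    public void userCanLogIn() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://localhost:8000/login"); // hypothetical app URL
            driver.findElement(By.id("username")).sendKeys("alice");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // The functional assertion: the full story ends on the dashboard
            assertEquals("Dashboard", driver.getTitle());
        } finally {
            driver.quit(); // always release the browser
        }
    }
}
```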
ATDD is very useful if you have a very good grasp of how the UI should behave, and if you can keep the cost of changing the functional tests you build up front low.
The biggest problem with doing this, of course, is that most often the UI is not fully specced out.
For example, if you are building a product and are still doing rapid iterations of the UI, running usability tests and incorporating the feedback, you do not want the baggage of fixing your functional tests with every small change to the UI.
Given that functional tests are generally slow, the feedback cycle is long, and it's very painful to keep them green as the UI changes.
I work on a product team where we had this exact same decision to make. A colleague of mine has summarized our final approach very well here. (Disclaimer: please ignore the tool-specific details there.)
I've done Acceptance TDD with a UI. We would assert that the common header and footer were used by XPath'ing for the appropriate IDs. We also used XPath to assert that the data appeared in the correct tag relative to whatever IDs we used for basic layout and structure, and we would assert that the output page is valid HTML 4.01 Strict.
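Something like the following, using only the JDK's built-in XPath support; the page string here stands in for the real fetched output:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.junit.Test;
import org.w3c.dom.Document;
import static org.junit.Assert.assertEquals;

public class LayoutAssertionTest {

    @Test
    public void pageUsesCommonHeader() throws Exception {
        String page = "<html><body><div id='header'>Acme Corp</div></body></html>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(page.getBytes("UTF-8")));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Assert the shared header appears at the expected id
        assertEquals("Acme Corp", xpath.evaluate("//div[@id='header']", doc));
    }
}
```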
Programmatic UI tests are a salvation when you want to be confident your UI does what's expected. UI tests are closer to BDD (behavior-driven development) than to TDD, but the terminology is cloudy, and whatever you call them, they are useful!
I have had rather good experience using Cucumber. I use it to test Flex applications, but it's mostly used to test web apps. Click the link! The site has really nice examples of the methodology.
Advanced UI is not compatible with straight TDD.
Use XP practices to guide you.
It is unwise to slavishly follow TDD (or even BDD) for UI; it's worth thinking about the root goals of both practices. In particular, we want to avoid using general-purpose TDD tools for the UI. They're simply not designed for that use case: you end up binding yourself to tests that you have to iterate on as rapidly as the UI code itself (I call this UI test-lock).
That's not the purpose of TDD. We don't do it to go slower; we use it to go fast (and NOT break things!).
We want to achieve a few things with TDD:
Code / Algorithmic Correctness
Minimal code to complete a test
Fast feedback
Architectural separation of concerns to the units (modular parts under test, no more, no less)
Formalize specification in a useful/usable way
To some extent, document what the module/system does (e.g. provide an example of use.)
We can achieve these goals in UI development, but how and where we apply practices is important to avoid test-lock.
Applying these principles to UI
What benefits can we get from testing a UI, and how can we avoid test-lock?
With usual TDD, one major benefit is that we can think about the abstractions and generally design the shape of the test subject as we think of names, relationships, etc.
For UI, much of what we want to do, especially if it's non-trivial, can only be reasoned about effectively when we can see it manifested on screen.
Writing tests that describe what we expect to see in a UI, specifically what data, works fine with TDD. For example, if we have a view that should show an account balance, a test that expects the balance to appear in an on-screen element addressable by some form of ID is good: we will know our app/view is displaying the expected content in standard UI library elements.
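A sketch of such a test; AccountView is a hypothetical view object that renders data into elements addressable by ID, which is exactly what the test pins down:

```java
import java.math.BigDecimal;
import java.text.NumberFormat;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountBalanceViewTest {

    // Hypothetical view: renders data into elements addressable by ID,
    // with no knowledge of how those elements are ultimately drawn.
    static class AccountView {
        private final Map<String, String> elements = new HashMap<>();

        void showBalance(BigDecimal balance) {
            elements.put("account-balance",
                    NumberFormat.getCurrencyInstance(Locale.US).format(balance));
        }

        String elementText(String id) {
            return elements.get(id);
        }
    }

    @Test
    public void balanceAppearsInTheExpectedElement() {
        AccountView view = new AccountView();
        view.showBalance(new BigDecimal("1234.56"));

        // Addressed by ID, not by pixel position or styling
        assertEquals("$1,234.56", view.elementText("account-balance"));
    }
}
```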
If, on the other hand, we want to create a more complex UI and wish to apply TDD to it, we will have some problems. Chances are, we won't even know how to code it, if it's novel enough.
For this we would prototype and iterate with tools which give us fast feedback, but don't saddle us with any need to write tests first. When we reach a point where implementation becomes clear, we can then switch up to test first for the production code.
This keeps us fast. Designers and product owners on the team can also see results, provide input, and suggest tweaks.
The code should be well formed. As soon as we identify small potential pieces which will make it to production code, we can migrate them to be under test, using TCR (test && commit || revert) or a similar method to add tests, or simply using TDD to re-write them. Remember, once it works you can apply snapshot/record-playback testing to keep regression errors at bay.
Do what works, keep it simple.
Addendum
Notes about the scope of UI discussed above
There will be a need to figure out how to make things work as expected in any advanced UI. There is also a likelihood that specs change and tweak in extremely arbitrary/capricious ways. We need to ensure any tests we apply to these test subjects are flexible, or extremely easy to re-generate from a working model (replay tests are gold for this).
Good development practice, in a nutshell code separation, is going to help you ensure the code is at least testable, and simpler to debug, maintain and reason about.
Don't be a slave to ritual.
It doesn't make you correct. Just slower.

Testing... how do the pros do it, and what techniques can scale to single-person development?

I've been writing software for years, but have never mastered the art of testing. My typical testing includes thorough run-throughs on my machines, and then testing in various operating systems via VMware. Mainly a brute-force play-with-it-until-it-breaks-or-doesn't approach. Where possible I work on actual hardware, but this isn't always practical.
My question is twofold:
How do medium-sized professional development houses do their testing?
What common techniques or procedures (outside of unit testing) can apply to a development team of one? I'm looking for practicality.
Thank you for your time and input.
Step 1: Unit Testing
Divide your software into components (which can be anything from single functions up to whole programs) and unit-test those components thoroughly, especially as regards the API and behavior that the rest of the application can see. (Don't forget to check the failure modes too, but beware of binding too closely to the exact nature of a failure; it's often good enough to test for the presence of the right class of exception rather than its exact message.) Make sure those tests pass; you're honing them into a specification of what the component should be doing. (Automated test running helps here, as does a CI system.)
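For instance, a minimal JUnit sketch of asserting the class of a failure rather than its message (assertThrows is available from JUnit 4.13; parsePercentage is a hypothetical component):

```java
import org.junit.Test;
import static org.junit.Assert.assertThrows;

public class FailureModeTest {

    // Hypothetical component under test
    static int parsePercentage(String s) {
        int value = Integer.parseInt(s);
        if (value < 0 || value > 100) {
            throw new IllegalArgumentException("percentage out of range: " + value);
        }
        return value;
    }

    @Test
    public void rejectsOutOfRangeInput() {
        // Bind to the class of the exception, not to its exact message
        assertThrows(IllegalArgumentException.class, () -> parsePercentage("250"));
    }
}
```

This is important because of…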
Step 2: Integration Testing
Test that the compositions of components that make up the application work (this is integration testing). Ideally you'll only be finding bugs in the specifications of things at this point (hah!). Wherever a component turns out to be wrong despite passing its unit tests, or things fail to work together despite being told to do so, you've probably got a bug in your specs from the previous step; you typically resolve these things by adding more detail to your unit tests and fixing the components until they work.
Note that to make integration go well, you want to keep this stage simple enough that the integration itself is in the "obviously no bugs" class of programs rather than the larger "no obvious bugs" class. An integration framework like Spring, or a scripting language, can help a lot here (though with the latter you have to guard against creating components on the sly; if you create a component, admit it and make sure it has a proper usage contract and unit tests to ensure that it meets that contract).
Where you can, you can make components by composing others together; these higher-level components need to be unit tested as characterized in Step 1 above. This might sound like extra work – it probably is – but it does have the advantage of meaning that you can use automated tests for larger parts of the program. (Alas, it's harder to do all integration tests with an automated test tool; such things tend to work better doing unit tests where you can mock out all the irrelevant parts.) But this doesn't save you from…
Step 3: Acceptance Testing
This is where the overall application is tested to see if it actually does what is desired. This might be automatable, but usually isn't. This is the level where you bring in users to see whether things are what they expected, though you might want to use internal testers a bit first. How easy this all is depends on the nature of the application.
Note also that user interfaces tend to spend more time in this step than the others, precisely because what makes for a good UI is difficult to impossible to pin down in algorithms (it relates much more to human psychology after all).
A final note: What I've written here sounds like testing is meant to be a laborious process that takes ages at the end of a project. It isn't! You can often get parts of an application done before others, do an integration of those parts (with mocks for the other bits) and test quite a bit of how acceptable this sub-application really is. Of course, when doing this take care to stop users from believing that everything is done; one way is to have dialog boxes that pop up and say things like “magic to happen here”. Silly but effective. :-)
For a small team, unit tests or automated integration tests are crucial: there are neither the hands nor the time for much manual testing, so the more you automate the better. This includes Continuous Integration.
Set up a separate 'beta' environment that is as close to your production environment as possible. Do most of your tests there - this way you will pick up all the things you've forgotten in your 'release plan'.
As a professional tester, my suggestion is that you should have a healthy mix of automated and manual testing. The examples below are .NET based, but it should be easy to find a tool for whatever technology you are using.
AUTOMATED TESTING
Unit Testing
Use NUnit to test your classes, functions and interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If possible, you should automate a lot of the functional testing. Some frameworks have functional testing built in; otherwise you have to use a tool for it. If you are developing web sites/applications, you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
My testing tool examples are Java based, but I will try to suggest tools which are ported to multiple languages or are language agnostic.
Use unit testing tools like JUnit (ported to a variety of languages). This will allow you to refactor your code safely. Most code bugs should result in the addition or correction of at least one test.
Use revision control, and set up an automated build environment that checks out the code and builds it. It should then run the automated test suite. If the application uses a database, the build environment should have its own database. Use different code branches for production (released) and development code.
Use integration testing tools like HttpUnit or Synergy to test web applications. Tools of this type are basically language agnostic, but you may want to choose a tool which can be extended in the language(s) you are using. For non-web applications, there may not be an equivalent tool for your platform. You may also want to use a performance tool like JMeter.
These tools have some setup costs, but a quick payback. Overall development time may be the same or less than if you didn't use the tools.
Acceptance tests generally don't lend themselves to automated testing. Where they do, include them in the integration testing. Get acceptance feedback as early and often as possible.
How do the pros do it? That all depends on who the 'pro' is... There are dozens of different approaches to testing, and plenty of experts to tell you that their way is the one true way. Agile gurus will tell you a very different story from the waterfall gurus, and the ISTQB folks will tell you a very different story from the Context-Driven folks.
Unfortunately there is no one true way, and you have to figure it out for yourself. Your approach to testing depends on too many factors. That's probably not very helpful, but you just need to be aware that any answer you get here will be only one option of many, and it may be completely inappropriate for your situation.
Personally, after several years in software testing, I have decided to align myself with the Context Driven school of software testing. See: http://www.context-driven-testing.com
Secondly, from your description of your current approach, that sounds a lot like exploratory testing to me. You may find this material interesting: satisfice.com/sbtm/
One thing you can do (combined with all the previous suggestions) is identify the risky and critical areas of your app and try to focus your testing efforts on these areas.

Forcing Unit Testing on Developers

First, a little background: the company I work for writes web-based software that is a hosted solution for our customers (i.e. we are an ASP, an Application Service Provider). We are adopting agile practices such as Scrum, and we execute sprints to build new features for our product.
I am a proponent of TDD (Test-Driven Development), and as part of what I deliver in a sprint I always write tests and I always get them integrated with the build (i.e. CruiseControl.NET); however, the other developers do not follow this practice and it is not enforced.
Is it a good practice to force a development group into providing unit tests as a part of what is delivered in a sprint?
Unless you are in a position of authority, the best thing you can do is to convince them of the value of the test suite.
It's very difficult to get developers to see the light on this issue if they aren't seeing it done right.
Try to pair with another developer and show them the benefits and the clarity that comes from writing the tests FIRST. If you don't do this, they are likely to write all of their code, get it working, and then write tests. So, from their point of view, it will feel like simply an extra task that doesn't help them get things done.
Also keep in mind that people often do not understand how to write good tests. Worse, some do not know how to use tools like jMock, which can lead to them getting stuck and giving up on writing a test.
Forcing anything onto anybody is not good practice in my view. I would show them the benefits of TDD at every opportunity I get. This should get the rest of the team to adopt TDD voluntarily.
You don't want lip-service unit testing, you want whole-hearted unit testing. That isn't something that can be forced. What you need to do is influence your teammates over time to see the benefits of unit testing and to develop a unit testing culture.
To start with, you need to understand that different people change for different reasons. In Crossing the Chasm terms, visionaries will adopt new techniques because they are better, but pragmatists adopt new techniques either because they solve a problem/pain they currently have or because everyone else is adopting them.
Your mission, then, is to show how unit testing can solve a pain your team currently feels. As you win people over one by one, eventually you reach a tipping point where unit testing is the norm and everyone goes along with it. However, if you can't tie unit testing to a pain your team feels, your efforts to convince them will likely fail.
Considering that unit testing improves code and software quality in the long term, I would say that, yes, it is good practice to have your developers provide unit tests, whether as part of some kind of sprint or not.
The two main barriers to unit tests I've seen are:
the developers don't get the point: "why would we write more code just to test?"
"we don't have time to write unit tests"
To answer the first point, you'll have to provide some sort of demonstration/training, I suppose; if you can get developers to see why unit tests are useful, they will like them, and use and develop them. But they need to see why: unless you are their boss, you cannot force people to write unit tests.
And even if you are their boss, they will probably not do the best possible job if they are being forced: unit testing is often done better when people understand why and how!
To answer the second point... well, you obviously need to give your developers some "special" time to develop unit tests; it can mean less time spent on manual testing, by the way.
Another thing: it is hard to know "what to test" and "how to test", so you will need to explain/demonstrate that to your colleagues. Some things cannot be tested, some things don't need to be, and some things are not "unit-testable" -- well, unless your software is really well engineered ^^
I've been in that position on many jobs and contracts in the past, so I've finally gotten discouraged and embraced the darkness by advocating Development Driven Development.
I've found that when most managers embrace XP, they're embracing throwing out the documentation, not really doing TDD. Programmers on most teams are rewarded for quick hacks that get the defect out of their queue, and it's one manager in ten who has the guts to stand up to senior management as an advocate of the overall quality of the product, as opposed to the bottom-line-for-this-quarter way of doing business. After all, most software jobs that pay anything are corporate sponsored, and Freud proved that corporations are insane.
Or at least, he should have.
There is a great Joel on Software article on this called Getting Things Done When You're Only a Grunt. His strategies applied to unit testing would be the following:
Just Do It
Most important, I think. Regularly write tests, do not make it appear as if you yourself see them as a minor part of development.
Harness the Power of Viral Marketing
If you see an error in someone else's code, write a test that triggers it and present both to them. Maybe they'll see your point.
Create a Pocket of Excellence
Identify the team members who are open to the idea but not quite sure how to work with unit tests and set up a number of test classes until everyone of them knows how to do it. Then turn towards the other ones. Things are a lot easier to establish once you are not the only one anymore.
Neutralize The Bozos
There will be team members who are almost impossible to get to write tests. Maybe those should be dealt with by regularly breaking their code with a new commit - and then pointing out that since they don't have tests it was hard for you to notice.
It depends on your version control system, but there is version control software that enables you to run scripts before check-in or before merging to the production branch, and it will not let the developer check in if the unit tests fail.
First, talk to your manager. If he is convinced that testing is a good thing, add a coverage check to your build system. If the coverage of your unit tests falls below a certain level, you can treat it as a failed build. This gives your colleagues a measure: a way to see when they fail to deliver tested code.
In your case, NCover seems to integrate nicely into CC.NET.
Create a metric that you want developers to achieve.
Make sure there is a subtask for Unit test creation for every story.
No. In my experience, TDD isn't all that useful in practice. I sometimes use it for really general fundamental classes (like geometry or generic data structures) that lend themselves naturally to automated tests. But for UI components or business logic, I find it's more trouble than it's worth.