When should I use specs for a Rails application and when Cucumber (formerly rspec-stories)? I know how both work and actively use specs, of course. But it still feels weird to use Cucumber. My current view is that it's convenient to use Cucumber when you're implementing an application for a client and don't yet understand how the whole system is supposed to work.
But what if I'm doing my own project? For most of the time, I know how the parts of the system interact. All I need to do is to write a bunch of unit-tests. What are the possible situations when I would need Cucumber then?
And, as a corresponding second question: do I have to write specs if I write Cucumber stories? Wouldn't it be double-testing of the same thing?
If you haven't already, you might want to check out Dan North's excellent article, What's in a Story? as a starting point.
We have two main uses for Cucumber stories. First, because the story form is very specific it helps focus the product owner's articulation of the features he wants built. This is the "token for a conversation" use of stories, and would be valuable whether or not we implemented the stories in code. Second, when the process is working well enough that we have complete stories before we begin writing the feature (more of an ideal that we strive for than a daily reality), you have your acceptance criteria spelled out clearly and you know exactly what and how much to build.
In our Rails work, Cucumber stories do not substitute for rspec unit tests. The two go hand in hand. In practice, the unit tests tend to drive development of the models and controllers, and the stories tend to drive development of the views (we tend not to write rspec for our views) and provide a good test of the application as a whole from the user's perspective.
If you're working solo, the communication aspect may not be that interesting to you, but the integration testing you get from Cucumber might be. If you take advantage of webrat, writing Cucumber can be fast and painless for a lot of your basic functionality.
Think of it as a cycle:
Write your Cucumber feature, then while developing the pieces for that feature, write specs to complete the individual components. Continue completing specs until you've written enough functionality for the feature to pass, then write your next feature.
My take is that it's a bad idea to use Cucumber in most situations because of the productivity cost its syntax imposes on you. I wrote extensively on the topic in Why Bother With Cucumber Tests?
A Cucumber story is more a description of the overall problem your application is solving than a check that individual bits of code work (i.e. unit tests).
As Abie describes, it's almost a list of requirements that the application should meet, and is very helpful for communication with your client, as well as being directly testable.
Nowadays you can use rspec with Capybara and Selenium Webdriver and avoid having to build and maintain all of the Cucumber story parsers. Here is what I would recommend:
1. Write out your story.
2. Using RSpec, I would create an integration test, e.g. spec/integrations/socks_rspec.rb.
3. Then, in that integration test, I would add a describe and an it block for each scenario.
4. Then I would implement the minimal functionality required to get the integration test passing, and while drilling down (into controllers and models, etc.) I would TDD the controllers and models.
5. As you come back up, your integration test should pass, and you can continue to add steps to the integration test.
6. Repeat.
One thing to note, however, is that the controller and integration tests have overlap that may not be necessary, so you have to use your best judgement so that you do not waste your time.
Also, once you find your groove you will find it most enjoyable to develop using BDD; until then, don't feel guilty if you don't feel like you are doing it perfectly, and don't overthink it. You will do great!
But what if I'm doing my own project? For most of the time, I know how the parts of the system interact. All I need to do is to write a bunch of unit-tests. What are the possible situations when I would need Cucumber then?
You still need Cucumber. You need it to document how you see the system working, and you need it to make sure you haven't broken functionality when you change things.
In other words, you need Cucumber stories for the same reasons as you need unit tests -- they just work on a higher level of abstraction.
I am aware of this question: https://stackoverflow.com/questions/428691/how-to-encourage-implementation-of-tdd
In my team, we write a lot of unit tests. But, in general, the programmers tend to write unit tests after writing the code. So we first finish the module functionality and then write tests. Our coverage is around 70% for most modules. I have tried convincing my technical manager and my team members to do pure TDD, wherein we first write tests and then the code, but in vain. I think writing tests first allows us to discover the design better. Am I just being finicky, especially when our coverage is quite high? If the answer to this question is no, then how do I persuade people to take a test-first approach?
EDIT: I think writing tests after writing code is an easier thing to do. People in my team have grown accustomed to doing this and are opposing any change.
I don't know that there is a whole lot you can tell people to convince them of the value of TDD. You can cite what the experts have told us about it, and your own personal experiences, but if folks are not willing to give it a try, chances are low that you sharing this information with them will help.
My experience with TDD was basically that it sounded like a really good idea, but it never really worked out the way it was supposed to. Then one day I tried it again on a new task and ended up with a solution to the problem that was simpler than what I would have thought possible, due entirely to the fact that I had used TDD. I think when developers have this sort of experience it changes the way they look at things, and makes them more willing to try it in other situations.
The challenge is being able to demonstrate this to the other developers. One way you may be able to do this is with the use of a TDD Kata like this one from Roy Osherove (he uses it in his TDD Master Course). It is designed specifically to demonstrate the value in working in small steps, implementing only the code that is needed to make each test pass. This may show folks how the process works, and make them more comfortable with giving it a try.
There was also a coding exercise I heard about where you gave two groups/teams of developers a reasonably simple task, and asked one of the groups to use TDD, and make sure they followed the "simplest thing that could possibly work" rules, while the other team did things however they wanted. Then, once that is done, you have the teams switch tasks, but throw out the code written by each team, leaving only the tests. The teams are then supposed to recreate the code for the task. Typically you will find that the team who inherits the TDD code has a much easier time doing this.
Given all that, though, I think the best thing you can do personally is to start doing TDD yourself for as much of your work as possible. This has the potential to give you some very specific references for where and how TDD has proved to be beneficial within the context of the current project. In particular, if you do code reviews, your peers may notice that the code you are writing with TDD is more concise and easier to maintain than the code that has been written without it. Your QA team may also notice a difference in the quality of the code, which is one of the things you hear a lot about from companies who move to TDD.
A couple suggestions. Your practicality may vary:
Win one or two people, such as your boss or an intern, over to your side first. Your first follower will make you a leader.
Start pair programming or mentoring. Even if it's just with an intern or two, working closely with someone can be a good way to influence their style. If you are willing, you could try becoming a manager.
Give a technical presentation on the subject. Make the focus on the why and the problem you are solving, instead of TDD. You want people to buy into the problem rather than your specific solution. Include a couple other alternatives so it doesn't seem like you are just trying to push what works for you.
Get some outside training from Object Mentor or the like. Works best if you can convince your boss and the team isn't a bunch of hardened soulless cynics.
To be honest, you should always just use a development/test cycle that works.
A lot of people like TDD, and a lot of big players like Google have embraced it because of the high test coverage it brings.
However, it seems that you and your team tend to be doing pretty well without it. And remember, any change in development style decreases productivity, at least temporarily. So remember the old adage: don't change what works.
However, if you and your customers are finding that there are still a lot of bugs that the tests don't cover, TDD is an ideal way to improve that coverage, so you can tell management that TDD is a way to increase customer satisfaction and thus make money. (That's management-speak for you!)
Perhaps leading by example can help:
Start working like this yourself.
Perhaps create a tutorial/script to set up the environment (the IDE) so that it does not add overhead to the TDD process:
Run the tests with a single keyboard shortcut.
The GUI of the test system should be present in the development view (not just in the testing view, so you don't have to move between them).
I am guessing that after a while people will be curious and ask you whether this TDD thing really works; you should have a prepared answer for that question :-)
Have you come across BDD at all? There's an associated change in vocabulary which I find really helps newcomers to TDD pick it up. Here's the vocab change:
http://lizkeogh.com/2009/11/06/translating-tdd-to-bdd/
I've found that using this language helps people focus on why it's useful to write the tests (or examples) first. I translated another example in the comments.
Even then, sometimes it's helpful to learn how tests are structured. If people have trouble learning how to write them first, writing them afterwards is a good learning step. You're right about the design benefits. It can take a while to grok.
In the past I've found that the best way to get TDD is to have a safe environment to practice in. Having my own toy app or running / attending workshops based on a toy app have both helped me a lot.
I have been building what is, IMO, a really cool RIA. But it's now close to completion and I need to test it to see if there are any bugs or counter-intuitive parts or anything like that. But how? Any time I ask someone to try to break it, they look at it for about 3 minutes and say "it's solid". How do you guys test things? I have never used a unit test before; in fact, until about 3 months ago I had never even heard of one, and I still don't really understand what it is. Would I have to build a whole new application to run every function? That would take forever, plus some functions may only produce errors in certain situations, so I do not understand unit tests.
The question is pretty open-ended so this post won't answer all your question. If you can refine what you are looking for, that would help.
There are two major pieces of testing you likely want to do. The first is unit testing and the second is what might be called acceptance testing.
Unit testing is trying each of the classes/methods in relative isolation and making sure they work. You can use something like JUnit, NUnit, etc. as a framework to hold your tests. Take a method, look at what different inputs it might expect and what its outcome should be, and then write a test case for each of these input/output pairs. This will tell you that most of the parts work as intended.
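To make the input/output pairs concrete, here is a minimal sketch in Java with JUnit; the discountedPrice method and its expected values are made up purely for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Hypothetical unit under test: applies a percentage discount to a price.
    static int discountedPrice(int price, int discountPercent) {
        return price - (price * discountPercent / 100);
    }

    @Test
    public void noDiscountReturnsOriginalPrice() {
        assertEquals(100, discountedPrice(100, 0));
    }

    @Test
    public void tenPercentDiscountOnOneHundred() {
        assertEquals(90, discountedPrice(100, 10));
    }

    @Test
    public void fullDiscountReturnsZero() {
        assertEquals(0, discountedPrice(100, 100));
    }
}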
Acceptance testing (or end-to-end testing as it is sometimes called) is running the whole system and making sure it works. Come up with a list of scenarios you expect users to do. Now systematically try them all. Try variations of them. Do they work? If so, you are likely ready to roll it out to at least a limited audience.
Also, check out How to Break Software by James Whittaker. It's one of the better testing books and is a short read.
The first thing is to systematically make sure everything works in the manner you expect it to. Then you want to try it against every realistic combination of hardware and installed software that is feasible and appropriate. Then you want to take every point of human interaction and try putting as much data in as possible, no data in at all, and special data that may cause exceptions. Then try doing things in an order or workflow you did not expect; sometimes certain actions depend on others. You and your friends will naturally do those steps in order; what happens when someone doesn't? Also, having complete novices use it is a good way to see odd things users might try.
Release it in beta?
It's based on Xcode and Cocoa development, but this video is still a great introduction to unit testing. Unit testing is really something that should be done alongside development, so if your application is almost finished it's going to take a while to implement.
Firebug has a good profiler for web apps. As for testing JS files, I use Scriptaculous. Whatever backend you are using needs to be fully tested too.
But before you do that, you need to understand what unit testing is. Unit testing is verifying that all of the individual units of source code function as they are intended. This means that you verify the output of all of your functions/methods. Basically, read this. There are different testing strategies beyond unit testing such as integration testing, which is testing that different modules integrate with one another. What you are asking people to do is Acceptance testing, which is verifying that it looks and behaves according to the original plan. Here is more on various testing strategies.
PS: always test boundary conditions
What are some of the tricks or tools or policies (besides having a unit testing standard) that you guys are using to write better unit tests? By better I mean 'covers as much of your code in as few tests as possible'. I'm talking about stuff that you have used and saw your unit tests improve by leaps and bounds.
As an example I was trying out Pex the other day and I thought it was really really good. There were tests I was missing out and Pex easily showed me where. Unfortunately it has a rather restrictive license.
So what are some of the other great stuff you guys are using/doing?
EDIT: Lots of good answers. I'll be marking as correct the answer that I'm currently not practicing but will definitely try and that hopefully gives the best gains. Thanks to all.
1. Write many tests per method.
2. Test the smallest thing possible. Then test the next smallest thing.
3. Test all reasonable input and output ranges. In other words: if your method returns a boolean, make sure to test both the true and the false return. For an int? -1, 0, 1, n, n+1 (proof by mathematical induction). Don't forget to check for all exceptions (assuming Java). (Points 3, 5 and 6 are sketched in code after this list.)
4a. Write an abstract interface first.
4b. Write your tests second.
4c. Write your implementation last.
5. Use dependency injection. (For Java: Guice - supposedly better; Spring - probably good enough.)
6. Mock your unit's collaborators with a good toolkit like Mockito (assuming Java, again).
7. Google much.
8. Keep banging away at it. (It took me 2 years - without much help but for Google - to start "getting it".)
9. Read a good book about the topic.
10. Rinse, repeat...
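The boundary-value, dependency injection and mocking points above (3, 5 and 6) can be sketched together. This is only an illustration in Java with JUnit and Mockito; the Account class, its audit-log collaborator and the chosen boundary values are all invented for the example:

import org.junit.Test;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import java.util.List;

public class AccountTest {

    // Hypothetical unit under test: it guards withdrawals and reports them
    // through a collaborator passed into the constructor (dependency injection).
    static class Account {
        private final List<String> auditLog;
        private int balance;

        Account(int balance, List<String> auditLog) {
            this.balance = balance;
            this.auditLog = auditLog;
        }

        boolean withdraw(int amount) {
            if (amount <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            if (amount > balance) {
                return false;
            }
            balance -= amount;
            auditLog.add("withdrew " + amount);
            return true;
        }
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsAZeroAmount() {                     // boundary: 0 raises an exception
        new Account(100, mock(List.class)).withdraw(0);
    }

    @Test
    public void coversBothBooleanReturns() {
        Account account = new Account(100, mock(List.class));
        assertTrue(account.withdraw(100));                 // boundary: exactly the balance
        assertFalse(account.withdraw(1));                  // boundary: one more than remains
    }

    @Test
    public void reportsThroughItsMockedCollaborator() {
        List<String> auditLog = mock(List.class);          // Mockito stands in for the collaborator
        new Account(100, auditLog).withdraw(40);
        verify(auditLog).add("withdrew 40");               // verify the interaction, not the list itself
    }
}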
Write tests before you write the code (ie: Test Driven Development). If for some reason you are unable to write tests before, write them as you write the code. Make sure that all the tests fail initially. Then, go down the list and fix each broken one in sequence. This approach will lead to better code and better tests.
If you have time on your side, then you may even consider writing the tests, forgetting about it for a week, and then writing the actual code. This way you have taken a step away from the problem and can see the problem more clearly now. Our brains process tasks differently if they come from external or internal sources and this break makes it an external source.
And after that, don't worry about it too much. Unit tests offer you a sanity check and stable ground to stand on -- that's all.
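To make the test-first sequence concrete, here is a tiny, hypothetical Java/JUnit illustration; the Greeting class does not exist when the test is first written, so the test fails (red) until the simplest passing implementation is added (green):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GreetingTest {

    // Step 1 (red): written before Greeting exists, so it does not even compile
    // at first; once a stub Greeting is added, it fails.
    @Test
    public void greetsAPersonByName() {
        assertEquals("Hello, Ada!", new Greeting().forName("Ada"));
    }
}

// Step 2 (green): the simplest implementation that makes the test pass.
class Greeting {
    String forName(String name) {
        return "Hello, " + name + "!";
    }
}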
On my current project we use a little generation tool to produce skeleton unit tests for various entities and accessors. It provides a fairly consistent approach for each modular unit of work that needs to be tested, and creates a great place for developers to test out their implementations (i.e. the unit test class is added by default when the rest of the entities and other dependencies are added).
The structure of the (templated) tests follows a fairly predictable syntax, and the template allows for implementation of module/object-specific buildup/tear down (we also use a base class for all the tests to encapsule some logic).
We also create instances of entities (and assign test data values) in static functions so that objects can be created programmatically and used within different test scenarios and across test classes, which is proving to be very helpful.
Reading a book like The Art of Unit Testing will definitely help.
As far as policy goes read Kent Beck's answer on SO, particularly:
to test as little as possible to reach a given level of confidence
Write pragmatic unit tests for tricky parts of your code and don't lose sight of the fact that it's the program you are testing that's important, not the unit tests.
I have a Ruby script that generates test stubs for "brown" code that wasn't built with TDD. It writes my build script, sets up includes/usings, and writes a setup/teardown to instantiate the test class in the stub. It helps to have a consistent starting point without all the typing tedium when I hack at code written in the Dark Times.
One practice I've found very helpful is the idea of making your test suite isomorphic to the code being tested. That means that the tests are arranged in the same order as the lines of code they are testing. This makes it very easy to take a piece of code and the test suite for that code, look at them side-by-side and step through each line of code to verify there is an appropriate test. I have also found that the mere act of enforcing isomorphism like this forces me to think carefully about the code being tested, such as ensuring that all the possible branches in the code are exercised by tests, or that all the loop conditions are tested.
For example, given code like this:
void MyClass::UpdateCacheInfo(
    CacheInfo *info)
{
    if (mCacheInfo == info) {
        return;
    }

    info->incrRefCount();
    mCacheInfo->decrRefCount();
    mCacheInfo = info;
}
The test suite for this function would have the following tests, in order:
test UpdateCacheInfo_identical_info
test UpdateCacheInfo_increment_new_info_ref_count
test UpdateCacheInfo_decrement_old_info_ref_count
test UpdateCacheInfo_update_mCacheInfo
Say we have realized the value of TDD too late. The project is already mature, and a good number of customers have started using it.
Say the automated testing in use is mostly functional/system testing, and there is a good deal of automated GUI testing.
Say we have new feature requests and new bug reports (!), so a good deal of development still goes on.
Note that there are already plenty of business objects with little or no unit testing.
There is too much collaboration/relationship between them, which again is tested only through higher-level functional/system testing. No integration testing per se.
There are big databases in place with plenty of tables, views, etc. Just instantiating a single business object already takes a good number of database round trips.
How can we introduce TDD at this stage?
Mocking seems to be the way to go. But the amount of mocking we would need to do here seems like too much. It sounds like an elaborate infrastructure would need to be developed to get mocking working for the existing stuff (business objects, databases, etc.).
Does that mean TDD is a suitable methodology only when starting from scratch? I am interested in hearing about feasible strategies for introducing TDD into an already mature product.
Creating a complex mocking infrastructure will probably just hide the problems in your code. I would recommend that you start with integration tests, with a test database, around the areas of the code base that you plan to change. Once you have enough tests to ensure that you won't break anything if you make a change, you can start to refactor the code to make it more testable.
See also Michael Feathers' excellent book Working Effectively with Legacy Code; it's a must-read for anyone thinking of introducing TDD into a legacy code base.
I think it's completely feasible to introduce TDD into an existing application; in fact, I have recently done it myself.
It is easiest to code new functionality in a TDD way and restructure the existing code to accommodate this. This way you start off with a small section of your code tested, but the effects start to spread through the whole code base.
If you've got a bug, then write a unit test to reproduce it, refactoring the code as necessary (unless the effort is really not worth it).
Personally, I don't think there's any need to go crazy and try and retrofit tests into the existing system as that can be very tedious without a great amount of benefit.
In summary, start small and your project will become more and more test infected.
Yes you can. From your description the project is in good shape: a solid amount of functional test automation is the way to go! In some respects it's even more useful than unit testing. Remember that TDD != unit testing; it's all about short iterations and solid acceptance criteria.
Please remember that having an existing and accepted project actually makes testing easier: a working application is the best requirements specification. So you're in a better position than someone who has just a scrap of paper to work with.
Just start working on your new requirements/bug fixes with TDD. Remember that there will be an overhead associated with switching methodology (make sure your clients are aware of this!), and probably expect a good deal of reluctance from team members who are used to the 'good old ways'.
Don't touch the old things unless you need to. If you have an enhancement request that affects existing stuff, then factor in extra time for the extra set-up work.
Personally I don't see much value in introducing a complex infrastructure for mocking; surely there is a way to achieve the same results in a more lightweight way, but it obviously depends on your circumstances.
One tool that can help you test legacy code (assuming you can't or won't have the time to refactor it) is Typemock Isolator: Typemock.com
It allows injecting dependencies into existing code without needing to extract interfaces and such because it does not use standard reflection techniques (dynamic proxy etc..) but uses the profiler APIs instead.
It's been used to test apps that rely on SharePoint, HttpContext and other problematic areas.
I recommend you take a look.
(I work as a dev in that company, but it is the only tool that does not force you to refactor existing legacy code, saving you time and money)
I would also highly recommend "Working effectively with legacy code" for more techniques.
Roy
Yes you can. Don't do it all at once, but introduce just what you need to test a module whenever you touch it.
You can also start with more high level acceptance tests and work your way down from there (take a look at Fitnesse for this).
I would start with some basic integration tests. This will get buy-in from the rest of the staff. Then start to separate the parts of your code which have dependencies. Work towards using Dependency Injection as it will make your code much more testable. Treat bugs as an opportunity to write testable code.
OK, I know there have already been questions about getting started with TDD. However, I guess I kind of know that the general consensus is to just do it. However, I seem to have the following problems getting my head into the game:
1. When working with collections, do we still test the obvious add/remove/insert operations for success, even when they are based on generics etc. where we kind of "know" it's going to work?
2. Some tests seem to take forever to implement, such as when working with string output. Is there a "better" way to go about this sort of thing? (E.g. test the object model before parsing, break parsing down into small operations and test there.) In my mind you should always test the "end result", but that can vary wildly and be tedious to set up.
3. I don't have a testing framework to use so that I can "practice" more (work won't pay for one). Are there any good ones that are free for commercial use? (At the moment I am using good ol' Debug.Assert :)
4. Probably the biggest: sometimes I don't know what to expect NOT to happen. I mean, you get your green light, but I am always concerned that I may be missing a test. Do you dig deeper to try to break the code, or leave it be and wait for it all to fall over later (which will cost more)?
So basically what I am looking for here is not a "just do it" but more an "I did this, had problems with this, solved them by this". The personal experience :)
First, it is alright and normal to feel frustrated when you first start trying to use TDD in your coding style. Just don't get discouraged and quit, you will need to give it some time. It is a major paradigm shift in how we think about solving a problem in code. I like to think of it like when we switched from procedural to object oriented programming.
Secondly, I feel that test driven development is first and foremost a design activity that is used to flesh out the design of a component by creating a test that first describes the API it is going to expose and how you are going to consume its functionality. The test will help shape and mold the System Under Test until you have been able to encapsulate enough functionality to satisfy whatever tasks you happen to be working on.
Taking the above paragraph in mind, let's look at your questions:
1. If I am using a collection in my system under test, then I will set up an expectation to make sure that the code was called to insert the item, and then assert the count of the collection. I don't necessarily test the Add method on my internal list; I just make sure it was called when the method that adds the item is called. I do this by adding a mocking framework into the mix with my testing framework. (There is a small code sketch of this after point 4 below.)
2. Testing strings as output can be tedious. You cannot account for every outcome. You can only test what you expect based on the functionality of the system under test. You should always break your tests down to the smallest element that they are testing. Which means you will have a lot of tests, but tests that are small and fast and only test what they should, nothing else.
3. There are a lot of open source testing frameworks to choose from. I am not going to argue which is best. Just find one you like and start using it:
MbUnit
nUnit
xUnit
4. All you can do is set up your tests to account for what you want to happen. If a scenario comes up that introduces a bug in your functionality, at least you have a test around that functionality; add the scenario as a new test and then change your functionality until the test passes. One way to find where we may have missed a test is to use code coverage.
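As a rough sketch of point 1 (shown here in Java with JUnit and Mockito, though the same idea applies in the .NET frameworks listed above), the expectation-plus-count test might look like this; the ShoppingCart class and its backing list are invented for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import java.util.List;

public class ShoppingCartTest {

    // Hypothetical system under test: it delegates storage to an injected collection.
    static class ShoppingCart {
        private final List<String> items;
        ShoppingCart(List<String> items) { this.items = items; }
        void addItem(String sku) { items.add(sku); }
        int count() { return items.size(); }
    }

    @Test
    public void addingAnItemInsertsIntoTheBackingCollection() {
        List<String> items = mock(List.class);
        when(items.size()).thenReturn(1);       // we are not testing List.add itself

        ShoppingCart cart = new ShoppingCart(items);
        cart.addItem("sku-42");

        verify(items).add("sku-42");            // the expectation: the insert was called
        assertEquals(1, cart.count());          // and the count is asserted
    }
}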
I introduced you to the mocking term in the answer for question one. When you introduce mocking into your arsenal for TDD, it makes testing dramatically easier by abstracting away the parts that are not part of the system under test. Here are some of the mocking frameworks out there:
Moq: Open Source
RhinoMocks: Open Source
TypeMock: Commercial Product
NSubstitute: Open Source
One way to help in using TDD, besides reading about the process, is to watch people do it. I recommend watching the screencasts by JP Boodhoo on DNRTV. Check these out:
Jean Paul Boodhoo on Test Driven Development Part 1
Jean Paul Boodhoo on Test Driven Development Part 2
Jean Paul Boodhoo on Demystifying Design Patterns Part 1
Jean Paul Boodhoo on Demystifying Design Patterns Part 2
Jean Paul Boodhoo on Demystifying Design Patterns Part 3
Jean Paul Boodhoo on Demystifying Design Patterns Part 4
Jean Paul Boodhoo on Demystifying Design Patterns Part 5
OK, these will help you see how the terms I introduced are used. They will also introduce another tool called ReSharper and how it can facilitate the TDD process. I can't recommend this tool enough when doing TDD. It seems like you are learning the process and are just running into some of the problems that have already been solved by using other tools.
I think I would be doing an injustice to the community, if I didn't update this by adding Kent Beck's new series on Test Driven Development on Pragmatic Programmer.
From my own experience:
1. Only test your own code, not the underlying framework's code. So if you're using a generic list then there's no need to test Add, Remove, etc.
2. There is no 2. Look over there! Monkeys!!!
3. NUnit is the way to go.
4. You definitely can't test every outcome. I test for what I expect to happen, and then test a few edge cases where I expect to get exceptions or invalid responses. If a bug comes up down the track because of something you forgot to test, the first thing you should do (before trying to fix the bug) is write a test to prove that the bug exists.
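A tiny, hypothetical Java/JUnit sketch of that last point, writing the test that proves the bug before fixing it (the lineTotal method and the zero-quantity bug are made up for the example):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderTotalRegressionTest {

    // Hypothetical unit under test, shown here already fixed.
    static int lineTotal(int unitPrice, int quantity) {
        return unitPrice * quantity;
    }

    // Hypothetical bug report: a quantity of zero used to be billed as one.
    // The test is written first, fails against the buggy code, and only then
    // is the code fixed; the test stays behind as a regression guard.
    @Test
    public void zeroQuantityContributesNothingToTheTotal() {
        assertEquals(0, lineTotal(250, 0));
    }
}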
My take on this is the following:
+1 for not testing framework code, but you may still need to test classes derived from framework classes.
If some class/method is cumbersome to test, it may be a strong indication that something is wrong with the design. I try to follow the "1 class - 1 responsibility, 1 method - 1 action" principle. That way you will be able to test complex methods much more easily by doing it in smaller portions.
+1 for xUnit. For Java you may also consider TestNG.
TDD is not a single event, it is a process. So do not try to envision everything from the beginning, but make sure that every bug found in the code is actually covered by a test once discovered.
I think the most important thing with (and actually one of the great outcomes of, in a somewhat recursive manner) TDD is successful management of dependencies. You have to make sure that modules are tested in isolation with no elaborate setup needed. For example, if you're testing a component that eventually sends an email, make the email sender a dependency so that you can mock it in your tests.
This leads to a second point: mocks are your friends. Get familiar with mocking frameworks and the style of tests they promote (behavioral, as opposed to the classic state-based), and the design choices they encourage (the "Tell, don't ask" principle).
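The email example might be sketched like this in Java with JUnit and Mockito; the EmailSender interface and OrderService class are invented purely for illustration:

import org.junit.Test;
import static org.mockito.Mockito.*;

public class OrderServiceTest {

    // Hypothetical dependency: anything that can send an email.
    interface EmailSender {
        void send(String to, String subject);
    }

    // Hypothetical component under test: the sender is passed in, not created
    // inside, so the test never sends real mail.
    static class OrderService {
        private final EmailSender emailSender;
        OrderService(EmailSender emailSender) { this.emailSender = emailSender; }

        void placeOrder(String customerEmail) {
            // ... order-handling logic would go here ...
            emailSender.send(customerEmail, "Order confirmation");
        }
    }

    @Test
    public void placingAnOrderSendsAConfirmationEmail() {
        EmailSender sender = mock(EmailSender.class);   // the mock replaces the real sender
        new OrderService(sender).placeOrder("ada@example.com");
        verify(sender).send("ada@example.com", "Order confirmation");
    }
}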
I found that the principles illustrated in the Three Index Cards to Easily Remember the Essence of TDD is a good guide.
Anyway, to answer your questions:
1. You don't have to test something you "know" is going to work, unless you wrote it. You didn't write generics; Microsoft did ;)
2. If you need to do so much for your test, maybe your object/method is doing too much as well.
3. Download TestDriven.NET to start unit testing right away in Visual Studio (except if it's an Express edition).
4. Just test the correct thing that will happen. You don't need to test everything that can go wrong: you have to wait for your tests to fail for that.
Seriously, just do it, dude. :)
I am no expert at TDD, by any means, but here is my view:
If it is completely trivial (getters/setters etc) do not test it, unless you don't have confidence in the code for some reason.
If it is a quite simple, but non-trivial method, test it. The test is probably easy to write anyway.
When it comes to what to expect not to happen, I would say that if a certain potential problem is the responsibility of the class you are testing, you need to test that it handles it correctly. If it is not the current class' responsibility, don't test it.
The xUnit testing frameworks are often free to use, so if you are a .Net guy, check out NUnit, and if Java is your thing check out JUnit.
The above advice is good, and if you want a list of free frameworks you have to look no farther than the xUnit Frameworks List on Wikipedia. Hope this helps :)
In my opinion (your mileage may vary):
1. If you didn't write it, don't test it. If you wrote it and you don't have a test for it, it doesn't exist.
3. As everyone's said, xUnit's free and great.
2 & 4. Deciding exactly what to test is one of those things you can debate about with yourself forever. I try to draw this line using the principles of design by contract. Check out "Object Oriented Software Construction" or "The Pragmatic Programmer" for details on it.
Keep tests short and "atomic". Test the smallest assumption in each test. Make each TestMethod independent; for integration tests I even create a new database for each method. If you need to build some data for each test, use an "Init" method. Use mocks to isolate the class you're testing from its dependencies.
I always think "what's the minimum amount of code I need to write to prove this works for all cases ?"
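As a rough illustration of the "Init" and independence advice above, in Java with JUnit 4 (the inventory fixture is invented for the example):

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;

import java.util.ArrayList;
import java.util.List;

public class InventoryTest {

    private List<String> inventory;   // hypothetical fixture, rebuilt for every test

    // The "Init" method: JUnit runs this before each @Test, so every test starts
    // from the same clean state and no test depends on another having run first.
    @Before
    public void setUp() {
        inventory = new ArrayList<>();
        inventory.add("widget");
    }

    @Test
    public void startsWithASingleItem() {
        assertEquals(1, inventory.size());
    }

    @Test
    public void removingTheItemEmptiesTheInventory() {
        inventory.remove("widget");
        assertTrue(inventory.isEmpty());    // unaffected by the other test's ordering
    }
}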
Over the last year I have become more and more convinced of the benefits of TDD.
The things that I have learned along the way:
1) Dependency injection is your friend. I'm not talking about inversion of control containers and frameworks to assemble plugin architectures, just passing dependencies into the constructor of the object under test. This pays huge dividends in the testability of your code.
2) I set out with the passion/zealotry of the convert, grabbed a mocking framework and set about using mocks for everything I could. This led to brittle tests that required lots of painful set-up and would fall over as soon as I started any refactoring. Use the correct kind of test double: fakes where you just need to honour an interface, stubs to feed data back to the object under test, and mocks only where you care about the interaction (there is a small sketch of this distinction after this list).
3) Tests should be small. Aim for one assertion or interaction being tested in each test. I try to do this and mostly I'm there. This is about the robustness of the test code, and also about the amount of complexity in a test when you need to revisit it later.
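A small illustration of the test-double distinction in point 2, using Java with JUnit and Mockito (the Report class and its two collaborators are hypothetical): the stub only feeds data to the object under test, while the mock is there so the interaction can be verified.

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import java.util.List;

public class ReportTest {

    // Hypothetical unit under test: reads from one collaborator, writes to another.
    static class Report {
        static int publishItemCount(List<String> source, List<String> sink) {
            int count = source.size();
            sink.add("count=" + count);
            return count;
        }
    }

    @Test
    public void stubFeedsDataAndMockVerifiesTheInteraction() {
        List<String> stubbedSource = mock(List.class);
        when(stubbedSource.size()).thenReturn(3);     // stub: only supplies data

        List<String> mockedSink = mock(List.class);   // mock: we care how it is used

        assertEquals(3, Report.publishItemCount(stubbedSource, mockedSink));
        verify(mockedSink).add("count=3");            // the interaction we care about
    }
}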
The biggest problem I have had with TDD has been working with a specification from a standards body and a third-party implementation of that standard that was the de facto standard. I coded lots of really nice unit tests to the letter of the specification, only to find that the implementation on the other side of the fence saw the standard as more of an advisory document. They played quite loose with it. The only way to fix this was to test against the implementation as well as the unit tests, and refactor the tests and code as necessary. The real problem was the belief on my part that as long as I had code and unit tests all was good. Not so. You need to be building actual outputs and performing functional testing at the same time as you are unit testing. Deliver small pieces of benefit all the way through the process, into users' or stakeholders' hands.
Just as an addition to this, I thought I would say I have put a blog post up on my thoughts on getting started with testing (following this discussion and my own research), since it may be useful to people viewing this thread.
"TDD – Getting Started with Test-Driven Development" - I have got some great feedback so far and would really appreciate any more that you guys have to offer.
I hope this helps! :)