Related
As I am writing tests, some of them have a lot of logic in them. Most of this logic could easily be unit tested, which would provide a higher level of trust in the tests.
I can see a way to do this, which would be to create a class TestHelpers, to put in /classes, and write tests for TestHelpers along with the regular tests.
I could not find any opinion on such a practice on the web, probably because the keywords to the problem are tricky ("tests for tests").
I am wondering whether this sounds like good practice, whether people have already done this, whether there is any advice on that, whether this points to bad design, or something of the sort.
I am running into this while doing characterization tests. I know there are some frameworks for it, but I am writing it on my own, because it's not that complicated, and it gives me more clarity. Also, I can imagine that one can easily run into the same issue with unit tests.
To give an example, at some point I am testing a function that connects to Twitter's API service and retrieves some data. In order to test that the data is correct, I need to test whether it's a JSON-encoded string, whether the structure matches Twitter's data structure, whether each value has the correct type, etc. The function that does all these checks on the retrieved data would typically be interesting to test on its own.
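To make that concrete, here is a rough sketch (in Python) of the kind of check helper I mean, together with the test I would like to write for the helper itself. The field names and types are invented for illustration; they are not Twitter's actual schema.

    import json
    import unittest

    EXPECTED_TYPES = {"id": int, "text": str, "created_at": str}

    def check_tweet_payload(raw):
        """Return a list of problems found in a raw API response; an empty list means it looks OK."""
        try:
            data = json.loads(raw)
        except ValueError:
            return ["not a JSON-encoded string"]
        problems = []
        for key, expected_type in EXPECTED_TYPES.items():
            if key not in data:
                problems.append("missing key: %s" % key)
            elif not isinstance(data[key], expected_type):
                problems.append("wrong type for key: %s" % key)
        return problems

    # ...and the "test for the test helper" that my question is about:
    class CheckTweetPayloadTest(unittest.TestCase):
        def test_rejects_non_json(self):
            self.assertEqual(["not a JSON-encoded string"], check_tweet_payload("not json"))

        def test_accepts_well_formed_payload(self):
            raw = json.dumps({"id": 1, "text": "hi", "created_at": "2013-01-01"})
            self.assertEqual([], check_tweet_payload(raw))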
Any idea or opinion on this practice?
One of the aphorisms about TDD is that "the tests test the code, and the code tests the tests." That is, because of the red-green-refactor cycle, you see the code fail the test and then after making it work, you see it pass the test - and that alone is enough to give you pretty good confidence that the test (and all the test utility code it calls) works correctly. For characterization tests, you don't have this red-green-refactor cycle, so it may be of value for you to write tests for your test utility methods.
I think it's too much to test the tests themselves. alfasin is right: who would then test the tests for the tests? And it is no coincidence that you can't find much info on this topic; that's because it's just not a common practice. Usually, well-written tests should cover the arrangement logic within the tests themselves. But I understand your aspiration: how can you be sure that a test is "well-written"? The most dangerous thing here is to have a passing test that should normally fail (but passes due to a bug in it). Having such a test is even worse than having no test at all. But to be honest, I have not run into many such cases in practice. My advice is simply to focus on writing good tests that cover all execution paths of your logic, and I think you'll be fine :)
While it sounds perverse, I have on occasion written automated tests for some of my testing infrastructure, if it gets fairly complex. The tests that test this testing infrastructure then tend to be simple, so the "what about testing the tests for the test?" question becomes moot, in my experience.
Note that this is mainly occurring in a library designed explicitly to aid testing for other people (people writing Qt apps, in this case), though I have done it for some stand-alone apps before: for example, when writing tests for Kate's Vim mode's auto-completion integration, the fake auto-completer test-helper code used for mimicking the auto-completion for a variety of configurations got complex enough that I actually started developing it test-first.
And it's probably worth mentioning that e.g. Google Mock has hundreds of tests written for it :)
I am currently writing some Acceptance-Tests that will help drive the design of a program I am about to write. Everything seems fine, except I've realized that the Acceptance-Tests are kinda complex; that is, although they are conceptually simple, they require quite a bit of tricky code to run. I'll need to make a couple of "helper" classes for my Acceptance-Tests.
My question is about how to develop them:
Make Unit-Tests of my Acceptance-Tests (this seems odd -- has anyone done anything like it?)
Make Unit-Tests for those helper classes (see the sketch after this list). After I have done all the code of those helper classes, I can move on and start working on the real Unit-Tests of my System. When using this approach, where would you put the helper classes? In the tests' project or in the real project? They don't necessarily have dependencies on testing/mocking frameworks.
Any other idea?
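To illustrate option 2, something like the following is what I have in mind: the helper lives alongside the test code and is itself developed test-first. The OrderBuilder name and shape here are purely an invented example, not code from my actual project.

    import unittest

    class OrderBuilder(object):
        """Test helper that assembles valid order dictionaries for the acceptance tests."""
        def __init__(self):
            self._items = []

        def with_item(self, name, price):
            self._items.append({"name": name, "price": price})
            return self

        def build(self):
            return {"items": list(self._items),
                    "total": sum(item["price"] for item in self._items)}

    class OrderBuilderTest(unittest.TestCase):
        def test_total_is_the_sum_of_item_prices(self):
            order = OrderBuilder().with_item("book", 10).with_item("pen", 2).build()
            self.assertEqual(12, order["total"])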
A friend is very keen on the notion that acceptance tests tell you whether your code is broken, while unit tests tell you where it's broken; these are complementary and valuable bits of information. Acceptance tests are better at letting you know when you're done.
But to get done, you need all the pieces along the way; you need them to be working, and that's what unit tests are great at. Done test-first, they'll also lead you to better design (not just code that works). A good approach is to write a big-picture acceptance test, and say to yourself: "when this is passing, I'm done." Then work to make it pass, working TDD: write a small unit test for the next little bit of functionality you need to make the AT pass; write the code to make it pass; refactor; repeat. As you progress, run the AT from time to time; you will probably find it failing later and later in the test. And, as mentioned above, when it's passing, you're done.
I don't think unit testing the acceptance test itself makes much sense. But unit testing its helper classes - indeed, writing them test-first - is a very good way to go. You're likely to find some methods that you write "just for the test" working their way into the production code - and even if you don't, you still want to know that the code your ATs use is working right.
If your AT is simple enough, the old adage of "the test tests the code, and the code tests the test" is probably sufficient - when you have a failing test, it's either because the code is wrong or because the test is wrong, and it should be easy enough to figure out which. But when the test is complicated, it's good to have its underpinnings well tested, too.
If you think of software development like a car production plant, then the act of writing software is like developing a new car. Each component is tested separately because it's new. It's never been done before. (If it has, you can either get people who've done it before or buy it off the shelf.)
Your build system, which builds your software and also which tests it, is like the conveyor belt - the process which churns out car after car after car. Car manufacturers usually consider how they're going to automate production of new components and test their cars as part of creating new ones, and you can bet that they also test the machines which produce those cars.
So, yes, unit-testing your acceptance tests seems perfectly fine to me, especially if it helps you go faster and keep things easier to change.
There is nothing wrong with using a unit test framework (like JUnit) to write acceptance tests (or integration tests). People don't like calling them 'unit tests' for many reasons. To me, the main reason is that integration/acceptance tests won't run every time someone checks in changes (they take too long and/or there is no proper environment).
Your helper classes are rather standard thing that comprise "test infrastructure code". They don't belong anywhere else but test code. It's your choice to test them or not. But without them your tests won't be feasible in big systems.
So, your choice is #2 or no tests of tests at all. There is nothing wrong with refactoring infrastructure code to make it more transparent and simple.
Your option 2 is the way I'd do it: Write helper classes test-first - it sounds like you know what they should do.
The tests are tests, and although the helper classes are not strictly tests, they won't be referenced by your main code, just the tests, so they belong with the tests. Perhaps they could be in a separate package/namespace from regular tests.
What kind of practices do you use to make your code more unit testing friendly?
TDD -- write the tests first; it forces you to think about testability and helps you write the code that is actually needed, not what you think you may need
Refactoring to interfaces -- makes mocking easier
Public methods virtual if not using interfaces -- makes mocking easier
Dependency injection -- makes mocking easier (a sketch follows this list)
Smaller, more targeted methods -- tests are more focused and easier to write
Avoidance of static classes
Avoid singletons, except where necessary
Avoid sealed classes
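As an illustration of the interface and dependency-injection points above, here is a rough Python sketch with invented names; the point is that the collaborator can be swapped for a fake in a test without touching the production class.

    import unittest

    class MailSender(object):
        """Interface-like base class; production code depends on this, not on SMTP details."""
        def send(self, to, body):
            raise NotImplementedError

    class WelcomeService(object):
        def __init__(self, sender):
            # The dependency is injected, so a test can pass in a fake.
            self._sender = sender

        def welcome(self, user_email):
            self._sender.send(user_email, "Welcome!")

    class FakeSender(MailSender):
        def __init__(self):
            self.sent = []

        def send(self, to, body):
            self.sent.append((to, body))

    class WelcomeServiceTest(unittest.TestCase):
        def test_sends_welcome_mail(self):
            fake = FakeSender()
            WelcomeService(fake).welcome("a@example.com")
            self.assertEqual([("a@example.com", "Welcome!")], fake.sent)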
Dependency injection seems to help.
Write the tests first - that way, the tests drive your design.
Use TDD
When writing you code, utilise dependency injection wherever possible
Program to interfaces, not concrete classes, so you can substitute mock implementations.
Make sure all of your classes follow the Single Responsibility Principle. Single responsibility means that each class should have one and only one responsibility. That makes unit testing much easier.
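For example, here is a rough sketch (invented names) of what single responsibility buys you in the tests: each class can be exercised on its own, with no setup belonging to the other.

    import unittest

    class PriceParser(object):
        """Only parses; knows nothing about discounts."""
        def parse(self, text):
            return int(text.strip())

    class DiscountCalculator(object):
        """Only calculates; knows nothing about parsing."""
        def apply(self, price, percent):
            return price - price * percent // 100

    class SingleResponsibilityTest(unittest.TestCase):
        def test_parser_alone(self):
            self.assertEqual(100, PriceParser().parse(" 100 "))

        def test_discount_alone(self):
            self.assertEqual(90, DiscountCalculator().apply(100, 10))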
I'm sure I'll be down voted for this, but I'm going to voice the opinion anyway :)
While many of the suggestions here have been good, I think it needs to be tempered a bit. The goal is to write more robust software that is changeable and maintainable.
The goal is not to have code that is unit testable. There's a lot of effort put into making code more "testable" despite the fact that testable code is not the goal. It sounds really nice and I'm sure it gives people the warm fuzzies, but the truth is all of those techniques, frameworks, tests, etc, come at a cost.
They cost time in training, maintenance, productivity overhead, etc. Sometimes it's worth it, sometimes it isn't, but you should never put the blinders on and charge ahead with making your code more "testable".
When writing tests (as with any other software task), Don't Repeat Yourself (the DRY principle). If you have test data that is useful for more than one test, then put it someplace where both tests can use it. Don't copy the code into both tests. I know this seems obvious, but I see it happen all the time.
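A small sketch of what I mean, using an invented builder function as the single shared source of test data:

    import unittest

    def make_sample_user(name="alice", age=30):
        """One place that builds the canonical test user; each test overrides only what it cares about."""
        return {"name": name, "age": age, "active": True}

    def may_purchase(user):
        return user["active"] and user["age"] >= 18

    class PurchaseRulesTest(unittest.TestCase):
        def test_adult_active_user_may_purchase(self):
            self.assertTrue(may_purchase(make_sample_user()))

        def test_minor_may_not_purchase(self):
            self.assertFalse(may_purchase(make_sample_user(age=16)))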
I use Test-Driven Development whenever possible, so I don't have any code that cannot be unit tested. It wouldn't exist unless the unit test existed first.
The easiest way is don't check in your code unless you check in tests with it.
I'm not a huge fan of writing the tests first. But one thing I believe in very strongly is that code must be checked in with its tests: not even an hour or so apart, but together. I think the order in which they are written is less important, as long as they come in together.
Small, highly cohesive methods. I learned this the hard way. Imagine you have a public method that handles authentication. Maybe you did TDD, but if the method is big, it will be hard to debug. Instead, if that #authenticate method does its work in a more pseudo-code-ish kind of way, calling other small methods (maybe protected), then when a bug shows up it's easy to write new tests for those small methods and find the faulty one.
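A rough sketch of the shape I mean, in Python with invented names and a stand-in user lookup:

    class Authenticator(object):
        def authenticate(self, username, password):
            # Reads like pseudocode; each step is small and individually testable.
            user = self._find_user(username)
            return (user is not None
                    and self._password_matches(user, password)
                    and not self._is_locked(user))

        def _find_user(self, username):
            # Stand-in lookup for the sketch; the real thing would hit a user store.
            return {"name": username, "password": "secret", "locked": False}

        def _password_matches(self, user, password):
            return user["password"] == password

        def _is_locked(self, user):
            return user["locked"]

    # When a bug shows up, a focused test on just one step is easy to add, e.g.:
    #   assert Authenticator()._is_locked({"locked": True})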
And something that you learn as the first thing in OOP, but that so many seem to forget: Code Against Interfaces, Not Implementations.
Spend some time refactoring untestable code to make it testable. Write the tests and get 95% coverage. Doing that taught me all I need to know about writing testable code. I'm not opposed to TDD, but learning the specifics of what makes code testable or untestable helps you to think about testability at design time.
Don't write untestable code
1. Using a framework/pattern like MVC to separate your UI from your business logic will help a lot.
2. Use dependency injection so you can create mock test objects.
3. Use interfaces.
Check out this talk, Automated Testing Patterns and Smells.
One of the main takeaways for me was to make sure that the unit test code is of high quality. If the code is well documented and well written, everyone will be motivated to keep this up.
No Statics - you can't mock out statics.
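The usual way around this (sketched in Python with invented names) is to route the static or global call through a seam you can replace in a test:

    import time

    class TokenGenerator(object):
        def __init__(self, clock=time.time):
            # The "static" call becomes a replaceable dependency.
            self._clock = clock

        def generate(self, user_id):
            return "%s-%d" % (user_id, int(self._clock()))

    # In a test, pass a deterministic clock instead of the real one:
    #   TokenGenerator(clock=lambda: 1000000).generate("alice")  # -> "alice-1000000"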
Also, Google has a tool that will measure the testability of your code...
I'm continually trying to find a process where unit testing is less of a chore and something that I actually WANT to do. In my experience, a pretty big factor is your tools. I do a lot of ActionScript work and sadly, the tools are somewhat limited, such as no IDE integration and lack of more advanced mocking frameworks (but good things are a-coming, so no complaints here!). I've done test driven development before with more mature testing frameworks and it was definitely a more pleasurable experience, but still felt like somewhat of a chore.
Recently however I started writing code in a different manner. I used to start with writing the test, watching them fail, writing code to make the test succeed, rinse and repeat and all that.
Now however, I start with writing interfaces, almost no matter what I'm going to do. At first I of course try to identify the problem and think of a solution. Then I start writing the interfaces to get a sort of abstract feel for the code and the communication. At that point, I usually realize that I haven't really figured out a proper solution to the problem at all as a result of me not fully understanding the problem. So I go back, revise the solution and revise my interfaces. When I feel that the interfaces reflect my solution, I actually start with writing the implementation, not the tests. When I have something implemented (draft implementations, usually baby steps), I start testing it. I keep going back between testing and implementing, a few steps forward at a time. Since I have interfaces for everything, it's incredibly easy to inject mocks.
I find working like this, with classes having very little knowledge of other implementations and only talking to interfaces, is extremely liberating. It frees me from thinking about the implementation of another class and I can focus on the current unit. All I need to know is the contract that the interface provides.
But yeah, I'm still trying to work out a process that works super-fantastically-awesomely-well every time.
Oh, I also wanted to add that I don't write tests for everything. Vanilla properties that don't do much but get/set variables are useless to test. They are guaranteed by the language contract to work. If they don't, I have way worse problems than my units not being testable.
To prepare your code to be testable:
Document your assumptions and exclusions.
Avoid large complex classes that do more than one thing - keep the single responsibility principle in mind.
When possible, use interfaces to decouple interactions and allow mock objects to be injected.
When possible, make public methods virtual to allow mock objects to emulate them.
When possible, use composition rather than inheritance in your designs - this also encourages (and supports) encapsulation of behaviors into interfaces.
When possible, use dependency injection libraries (or DI practices) to provide instances with their external dependencies.
To get the most out of your unit tests, consider the following:
Educate yourself and your development team about the capabilities of the unit testing framework, mocking libraries, and testing tools you intend to use. Understanding what they can and cannot do will be essential when you actually begin writing your tests.
Plan out your tests before you begin writing them. Identify the edge cases, constraints, preconditions, postconditions, and exclusions that you want to include in your tests.
Fix broken tests as near to when you discover them as possible. Tests help you uncover defects and potential problems in your code. If your tests are broken, you open the door to having to fix more things later.
If you follow a code review process in your team, code review your unit tests as well. Unit tests are as much a part of your system as any other code - reviews help to identify weaknesses in the tests just as they would for system code.
You don't necessarily need to "make your code more unit testing friendly".
Instead, a mocking toolkit can be used to make testability concerns go away.
One such toolkit is JMockit.
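JMockit itself is a Java toolkit, so the snippet below is not its API; it is just a rough Python analogue of the same idea, where a mocking library replaces a concrete dependency in place without the production code being refactored first:

    import time
    import unittest
    from unittest import mock

    def seconds_until(deadline):
        # Calls the concrete time.time() directly; no interface, no injection.
        return deadline - time.time()

    class SecondsUntilTest(unittest.TestCase):
        def test_counts_down_from_patched_clock(self):
            with mock.patch("time.time", return_value=100.0):
                self.assertEqual(20.0, seconds_until(120.0))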
Do you think Unit Tests are a good way to show your fellow programmers how to use an API?
I was listening to the Stackoverflow Podcast this week and I now realize that unit testing is not appropriate in all situations (i.e. it can cost you time if you go for 100% code coverage). I agree with this, as I have suffered from "OCD code coverage disorder" in the past, and have now mended my ways.
However, to further my knowledge of the subject, I'd like to know if unit testing is a good way to bring in new programmers that are unfamiliar with the project's APIs. (It sure seems easier than just writing documentation... although I like it when there's documentation later...)
I think Unit testing is a fantastic way to document APIs. It doesn't necessarily replace good commenting or API docs but it is a very practical way to help people get into the nitty gritty of your code. Moreover good unit testing promotes good design, so chances are your code will be easier to understand as a result.
Unit testing, IMO, isn't a substitute for documentation in any way. You write tests to find and exercise the corner cases of your code, to make sure that all boundary conditions are met. These are usually not the most appropriate examples to give to someone who is trying to learn how the method works.
You also typically don't give as much explanation of why what's happening is happening in a unit test.
Finally, the contents of unit tests typically aren't as accessible to documentation generation tools, and are usually separated from the code they're testing (oddities like Python's doctests notwithstanding).
Good documentation usually includes good examples. It's hard for me to imagine a better set of examples than exactly the ones that show what's expected of a correct implementation!
In addition, maintenance is a crucial issue. It's good practice to deal with a defect by adding a test that exposes the defect, and then by making that test succeed without failing prior tests. If unit tests are regarded as part of the documentation, then this new test will be part of the documentation's change log, thus helping subsequent developers learn from the experience.
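For instance, a defect-exposing test of the kind described above might look like this (the function and the defect are invented for illustration); once it is in the suite, it documents the fix for later readers:

    import unittest

    def parse_quantity(text):
        # Fix for an (invented) defect: negative quantities used to slip through.
        return max(0, int(text))

    class ParseQuantityRegressionTest(unittest.TestCase):
        def test_negative_input_is_clamped_to_zero(self):
            # Added when the defect was found; it must keep passing from now on.
            self.assertEqual(0, parse_quantity("-3"))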
No. Tests can be used as a secondary reference, they at least have the benefit of being examples that actually compile, but they are not a substitute for good API documentation.
Beware of zealots claiming almost mystical powers for unit tests saying things like "the tests are the design", "the tests are the specification", "the tests are the documentation". No. The tests are the tests and they don't give you an excuse to cut corners elsewhere.
Documentation should not be substituted with unit test code, period.
Documentation is geared towards the programmer that is trying to use, and understand, your API.
Unit tests are geared towards covering corner cases, that although they should be documented, should not be the focus for a new user of your API.
Documentation should have a gradual build-up of complexity, unit tests should be as simple as possible yet as complex as necessary in order to test the functionality.
For instance, you might have many unit tests that look very alike, but have minute differences, just to cover various oddball corner cases and assert the correct behavior.
Having the user of your API decipher these unit tests, figure out the differences, and work out why they should produce different (or possibly the same) behavior is not a good teaching aid.
Unit tests are for the maintainers of your API, the programmers that will fix bugs in it, or add new features to it, or refactor it, or ....
Documentation is for programmers that will use your API.
These are not the same target audiences.
One such subtle difference might be that while a unit test asserts that when a function gets passed a negative value, it will do something specific, the documentation would instead go into details about why that particular solution was picked. If all the documentation does is just rewrite the code into english, then there is not much point in it, so documentation usually is a lot more verbose and explanatory than the code.
Although not a replacement, Unit Tests can be an excellent way of demonstrating the use of a class or API, particularly for library functions where the amount of code to perform tests is minimal.
For example, our Math library is well unit-tested. If someone has a question (e.g. how to find information about ray/volume intersections) then the first place I send them is the unit tests.
(Why do we test something that could be considered so stable? Well for one thing we support five platforms and gradually add platform-specific SIMD implementations based on benchmarking, for another the unit tests provide an excellent framework for adding new platforms or functionality).
FWIW I thought this week's podcast discussion of unit testing was bang-on. Unit Testing (and Test Driven Development) is a great method for developing the components of your application, but for testing the application itself you quickly reach a point where the code needed for testing becomes disproportionate and brittle.
I know that one of the defining principles of Test driven development is that you write your Unit tests first and then write code to pass those unit tests, but is it necessary to do it this way?
I've found that I often don't know what I am testing until I've written it, mainly because the past couple of projects I've worked on have more evolved from a proof of concept rather than been designed.
I've tried to write my unit tests before and it can be useful, but it doesn't seem natural to me.
Some good comments here, but I think that one thing is getting ignored.
Writing tests first drives your design. This is an important step. If you write the tests "at the same time" or "soon after", you might be missing some design benefits of doing TDD in micro steps.
It feels really cheesy at first, but it's amazing to watch things unfold before your eyes into a design that you didn't think of originally. I've seen it happen.
TDD is hard, and it's not for everybody. But if you already embrace unit testing, then try it out for a month and see what it does to your design and productivity.
You spend less time in the debugger and more time thinking about outside-in design. Those are two gigantic pluses in my book.
There have been studies that show that unit tests written after the code has been written are better tests. The caveat though is that people don't tend to write them after the event. So TDD is a good compromise as at least the tests get written.
So if you write tests after you have written code, good for you, I'd suggest you stick at it.
I tend to find that I do a mixture. The more I understand the requirements, the more tests I can write up front. When the requirements - or my understanding of the problem - are weak, I tend to write tests afterwards.
TDD is not about the tests, but how the tests drive your code.
So basically you are writing tests to let an architecture evolve naturally (and don't forget to refactor !!! otherwise you won't get much benefit out of it).
That you end up with an arsenal of regression tests and executable documentation afterwards is a nice side effect, but not the main reason behind TDD.
So my vote is:
Test first
PS: And no, that doesn't mean that you don't have to plan your architecture before, but that you might rethink it if the tests tell you to do so !!!!
I've lead development teams for the past 6-7 years. What I can tell for sure is that as a developer and the developers I have worked with, it makes a phenomenal difference in the quality of the code if we know where our code fits into the big picture.
Test Driven Development (TDD) helps us answer "What?" before we answer "How?" and it makes a big difference.
I understand why there may be apprehensions about not following it in PoC type of development/architect work. And you are right, it may not make complete sense to follow this process. At the same time, I would like to emphasize that TDD is a process that falls in the Development Phase (I know it sounds obsolete, but you get the point :) when the low-level specifications are clear.
I think writing the test first helps define what the code should actually do. Too many times people don't have a good definition of what the code is supposed to do or how it should work. They simply start writing and make it up as they go along. Creating the test first makes you focus on what the code will do.
Not always, but I find that it really does help when I do.
I tend to write them as I write my code. At most I will write the tests for if the class/module exists before I write it.
I don't plan far enough ahead in that much detail to write a test earlier than the code it is going to test.
I don't know if this is a flaw in my thinking or method's or just TIMTOWTDI.
I start with how I would like to call my "unit" and make it compile.
like:
    picker = Pick.new
    item = picker.pick('a')
    assert item

then I create

    class Pick
      def pick(something)
        return nil
      end
    end
then I keep on using the Pick in my "test" case so I can see how I would like it to be called and how I would treat different kinds of behavior. Whenever I realize I could have trouble on some boundaries or with some kind of error/exception, I try to get it to fire and get a new test case.
So, in short. Yes.
I end up writing the test before far more often than not.
Directives are suggestions on how you could do things to improve the overall quality or productivity, or even both, of the end product. They are in no way laws to be obeyed, lest you get smitten in a flash by the god of proper coding practice.
Here's my compromise on the take and I found it quite useful and productive.
Usually the hardest part to get right is the requirements, and right behind them the usability of your class, API, package... Then comes the actual implementation.
1. Write your interfaces (they will change, but they go a long way in knowing WHAT has to be done).
2. Write a simple program to use the interfaces (the stupid main). This goes a long way in determining HOW it is going to be used (go back to 1 as often as needed).
3. Write tests on the interfaces (the bit I integrated from TDD; again, go back to 1 as often as needed).
4. Write the actual code behind the interfaces.
5. Write tests on the classes and the actual implementation; use a coverage tool to make sure you do not forget weird execution paths.
So, yes, I write tests before coding, but never before I have figured out what needs to be done with a certain level of detail. These are usually high-level tests and only treat the whole as a black box. They will usually remain as integration tests and will not change much once the interfaces have stabilized.
Then I write a bunch of tests (unit tests) on the implementation behind it; these will be much more detailed and will change often as the implementation evolves, as it gets optimized and expanded.
Is this strictly speaking TDD? Extreme? Agile? Whatever? I don't know, and frankly I don't care. It works for me. I adjust it as needs go and as my understanding of software development practice evolves.
My 2 cents.
I've been programming for 20 years, and I've virtually never written a line of code that I didn't run some kind of unit test on -- honestly, I know people do it all the time, but how someone can ship a line of code that hasn't had some kind of test run on it is beyond me.
Often if there is no test framework in place I just write a main() into each class I write. It adds a little cruft to your app, but someone can always delete it (or comment it out) if they want I guess. I really wish there was just a test() method in your class that would automatically compile out for release builds--I love my test method being in the same file as my code...
So I've done both Test Driven Development and Tested development. I can tell you that TDD can really help when you are a starting programmer. It helps you learn to view your code "From outside" which is one of the most important lessons a programmer can learn.
TDD also helps you get going when you are stuck. You can just write some very small piece that you know your code has to do, then run it and fix it--it gets addictive.
On the other hand, when you are adding to existing code and know pretty much exactly what you want, it's a toss-up. Your "Other code" often tests your new code in place. You still need to be sure you test each path, but you get a good coverage just by running the tests from the front-end (except for dynamic languages--for those you really should have unit tests for everything no matter what).
By the way, when I was on a fairly large Ruby/Rails project we had a very high % of test coverage. We refactored a major, central model class into two classes. It would have taken us two days, but with all the tests we had to refactor it ended up closer to two weeks. Tests are NOT completely free.
I'm not sure, but from your description I sense that there might be a misunderstanding on what test-first actually means. It does not mean that you write all your tests first. It does mean that you have a very tight cycle of
1. Write a single, minimal test.
2. Make the test pass by writing the minimal production code necessary.
3. Write the next test that will fail.
4. Make all the existing tests pass by changing the existing production code in the simplest possible way.
5. Refactor the code (both test and production!) so that it doesn't contain duplication and is expressive.
6. Continue with 3. until you can't think of another sensible test.
One cycle (3-5) typically just takes a couple of minutes. Using this technique, you actually evolve the design while you write your tests and production code in parallel. There is not much up front design involved at all.
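Compressed into code, one turn of that cycle might look like this (an invented toy example): the first test forced add() to exist, the second test failed against the hard-coded return value and forced a real implementation, and then the code was tidied up with all tests kept green.

    import unittest

    def add(a, b):
        # After step 4; the first "make it pass" version simply returned 0.
        return a + b

    class AddTest(unittest.TestCase):
        def test_adds_nothing(self):
            # Step 1: the first, minimal test.
            self.assertEqual(0, add(0, 0))

        def test_adds_two_numbers(self):
            # Step 3: the next test, which failed against the hard-coded 0.
            self.assertEqual(5, add(2, 3))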
On the question of it being "necessary" - no, it obviously isn't. Countless projects have been successful without doing TDD. But there is some strong evidence out there that using TDD typically leads to significantly higher quality, often without a negative impact on productivity. And it's fun, too!
Oh, and regarding it not feeling "natural", it's just a matter of what you are used to. I know people who are quite addicted to getting a green bar (the typical xUnit sign for "all tests passing") every couple of minutes.
There are so many answers now and they are all different. This perfectly resembles the reality out there. Everyone is doing it differently. I think there is a huge misunderstanding about unit testing. It seems to me as if people heard about TDD and they said it's good. Then they started to write unit tests without really understanding what TDD really is. They just got the part "oh yeah we have to write tests" and they agree with it. They also heard about this "you should write your tests first" but they do not take this serious.
I think it's because they do not understand the benefits of test-first, which in turn you can only understand once you've done it this way for some time. And they always seem to find 1,000,000 excuses why they don't like writing the tests first, because it's too difficult when figuring out how everything will fit together, etc. etc. In my opinion, it's all excuses for them to hide away from their inability to discipline themselves for once, try the test-first approach, and start to see the benefits.
The most ridiculous thing is when they start to argue "I'm not convinced about this test-first thing, but I've never done it this way" ... great ...
I wonder where unit testing originally comes from. Because if the concept really originates from TDD, then it's just ridiculous how people get it wrong.
Writing the tests first defines what your code will look like - i.e. it tends to make your code more modular and testable, so you do not create "bloated" methods with very complex and overlapping functionality. This also helps to isolate all core functionality in separate methods for easier testing.
Personally, I believe unit tests lose a lot of their effectiveness if not done before writing the code.
The age-old problem with testing is that no matter how hard we think about it, we will never come up with every possible scenario to write a test to cover.
Obviously unit testing itself doesn't prevent this completely, as it is restrictive testing, looking at only one unit of code and not covering the interactions between this code and everything else, but it provides a good basis for writing clean code in the first place that should at least restrict the chances for issues of interaction between modules. I've always worked to the principle of keeping code as simple as it possibly can be - in fact I believe this is one of the key principles of TDD.
So you start off with a test that basically says you can create a class of this type, and build it up from there, in theory writing a test for every line of code, or at least covering every route through a particular piece of code. Designing as you go! Obviously this is based on a rough up-front design produced initially, to give you a framework to work to.
As you say, it is very unnatural to start with and can seem like a waste of time, but I've seen first hand that it pays off in the long run, when defect stats come through and show that the modules that were fully written using TDD have far fewer defects over time than others.
Before, during and after.
Before is part of the spec, the contract, the definition of the work
During is when special cases, bad data, exceptions are uncovered while implementing.
After is maintenance, evolution, change, new requirements.
I don't write the actual unit tests first, but I do make a test matrix before I start coding listing all the possible scenarios that will have to be tested. I also make a list of cases that will have to be tested when a change is made to any part of the program as part of regression testing that will cover most of the basic scenarios in the application in addition to fully testing the bit of code that changed.
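When the coding does start, each row of such a matrix can be turned into a test case more or less mechanically; here is a rough Python sketch with invented values:

    import unittest

    def shipping_cost(weight_kg, express):
        base = 5 if weight_kg <= 1 else 9
        return base * 2 if express else base

    class ShippingCostMatrixTest(unittest.TestCase):
        # Each tuple is one row of the test matrix: (weight_kg, express, expected).
        MATRIX = [
            (0.5, False, 5),
            (0.5, True, 10),
            (2.0, False, 9),
            (2.0, True, 18),
        ]

        def test_every_row_of_the_matrix(self):
            for weight, express, expected in self.MATRIX:
                self.assertEqual(expected, shipping_cost(weight, express))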
Remember, with Extreme Programming your tests effectively are your documentation. So if you don't know what you're testing, then you don't know what you want your application to do.
You can start off with "Stories" which might be something like
"Users can Get list of Questions"
Then you start writing code to satisfy the unit tests. To solve the above you'll need at least a User and a Question class. So then you can start thinking about the fields:
"User Class Has Name DOB Address TelNo Locked Fields"
etc.
Hope it helps.
Crafty
Yes, if you are using true TDD principles. Otherwise, as long as you're writing the unit-tests, you're doing better than most.
In my experience, it is usually easier to write the tests before the code, because by doing it that way you give yourself a simple debugging tool to use as you write the code.
I write them at the same time. I create the skeleton code for the new class and the test class, and then I write a test for some functionality (which then helps me to see how I want the new object to be called), and implement it in the code.
Usually, I don't end up with elegant code the first time around; it's normally quite hacky. But once all the tests are working, you can refactor away until you end up with something pretty neat, tidy and provably rock solid.
When you are writing something you're used to writing, it helps to first write tests for all the things you would regularly check, and then write those features. More often than not, those features are the most important ones for the piece of software you are writing. Now, on the other side, there are no silver bullets and things should never be followed to the letter. Developer judgment plays a big role in the decision to use test-driven development versus test-later development.