When should I design and document my test cases? - unit-testing

In the SDLC, the testing phase comes right after implementation. However, test-driven development encourages us to test while implementing. And in my lecture course, the professor said test cases should be part of the design.
I am a junior developer. When implementing a new feature, when should I design and document my test cases?
I have found that it is not very practical to test all the cases after finishing the implementation, because once a case fails I have to change the code and retest all the cases again. Is there a way to overcome or avoid this? I know automated tests are one solution, but automated tests cannot simulate all of the test cases, especially integration test cases that involve different parties.
Also, in my test cases, should I test all parts of the code? Or just test the functionality of that feature request? Or does it actually depend on how much time you have?
Many thanks.

Your question isn't so easy to answer, because, as you say, "it actually depends on how much time you got." Here are some opinions though:
Test after implementation: No
As a programmer, you're an expensive and scarce resource with multiple deadlines stacked up on top of each other. So effectively, this means "never test". After you've implemented one chunk of code, you will move on to the next chunk of code, and mean to come back to write tests "when you have time" (you never have time).
There are also the problems you mention. If you do all your testing after the code is written, and your tests discover something fundamentally wrong, you have to go back and fix all your code as well as all your tests.
Test while implementing: Yes
This method is actually really helpful once you get a rhythm for it. You write a bit of a class, then write a bit of a unit test, and continually revise your tests and your code until you're finished. I believe it is actually faster than writing code without tests.
It is also particularly helpful when working on a large project. Running a unit test to see if a little module is working is instantaneous. Building and loading your entire application to see if a little module is working may take several minutes. It may also interrupt your concentration (which costs at least 10 minutes).
What to test: As much as possible
100% test coverage is probably never practical. But absolutely test the critical pieces of your program, things that perform mathematical computation or lots of business logic. Test everything that's leftover as much as possible. There's no reason to test a "toString()" function, unless that happens to be critical to business logic or something.
Also, keep your tests as simple as possible, just inputs and outputs. Most of my test functions are two or three lines. If your function is hard to test because there are too many combinations, it's a sign that your function might need to be broken up a little bit. Make sure to test edge cases and "impossible" scenarios.
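As an illustration of "just inputs and outputs" (a minimal Java/JUnit sketch; the Calculator class and its divide method are invented for the example), each case is only a line or two, including the edge cases:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test: Calculator.divide(int, int)
public class CalculatorTest {

    @Test
    public void dividesEvenly() {
        assertEquals(2, new Calculator().divide(10, 5));
    }

    @Test
    public void roundsTowardZero() {
        // edge case: integer division truncates
        assertEquals(3, new Calculator().divide(7, 2));
    }

    @Test(expected = ArithmeticException.class)
    public void rejectsDivisionByZero() {
        // the "impossible" input still deserves a test
        new Calculator().divide(1, 0);
    }
}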

My experience:
Document your tests if the code is not self-explanatory or if the case being tested is a 'tricky' corner case which is not obvious at first sight (although the code may be).
Don't create separate documents for your tests. Put everything in comments and Javadocs if you are using Java. In other words, keep this information close to the code. That's where it is needed.
About designing and implementation: just iterate. Write some implementation, then a bit of testing code for it, then more implementation code, etc... until you are done with both implementation and test code. It goes faster than writing all implementation, then testing, then rewriting failing implementation code. You can't anticipate all tests to implement at design time, it is impossible. So, no worries if you don't get it all.
If you cover more than 80% of the code you are already good, more is better. Sometimes, code can't be tested. I recommend using test coverage tools, such as Emma for Java.
Or does it actually depend on how much time you have?
The time you save by not testing never pays for the time you will have to spend fixing bugs later in the project. A proper test set always pays off down the road, always.

Related

TDD with unclear requirements

I know that TDD helps a lot and I like this method of development where you first create a test and then implement the functionality. It is a very clear and correct way.
But due to the nature of my projects, it often happens that when I start to develop some module I know very little about what I want and how it will look in the end. The requirements appear as I develop, and there may be 2 or 3 iterations where I delete all or part of the old code and write new code.
I see two problems:
1. I want to see the result as soon as possible to understand whether my ideas are right or wrong. Unit tests slow down this process. So it often happens that I write unit tests after the code is finished, which is known to be a bad pattern.
2. If I write the tests first, I need to rewrite not only the code twice or more times but also the tests. It takes a lot of time.
Could someone please tell me how can TDD be applied in such situation?
Thanks in advance!
I want to see the result as soon as possible to understand whether my ideas are right or wrong. Unit tests slow down this process.
I disagree. Unit tests and TDD can often speed up getting results because they force you to concentrate on the results rather than implementing tons of code that you might never need. It also allows you to run the different parts of your code as you write them so you can constantly see what results you are getting, rather than having to wait until your entire program is finished.
I find that TDD works particularly well in this kind of situation; in fact, I would say that having unclear and/or changing requirements is actually very common.
I find that the best uses of TDD is ensuring that your code is doing what you expect it to do. When you're writing any code, you should know what you want it to do, whether the requirements are clear or not. The strength of TDD here is that if there is a change in the requirements, you can simply change one or more of your unit tests to reflect the changed requirements, and then update your code while being sure that you're not breaking other (unchanged) functionality.
I think that one thing that trips up a lot of people with TDD is the assumption that all tests need to be written ahead of time. I think it's more effective to use the rule of thumb that you never write any implementation code while all of your tests are passing; this simply ensures that all code is covered, while also ensuring that you're checking that all code does what you want it to do without worrying about writing all your tests up front.
IMHO, your main problem is that you have to delete some code. This is waste, and this is what should be addressed first.
Perhaps you could prototype, or utilize "spike solutions" to validate the requirements and your ideas then apply TDD on the real code, once the requirements are stable.
The risk is to apply this and to have to ship the prototype.
Also, you could test-drive the "sunny path" first and only implement the remaining parts, such as error handling, after the requirements have been pinned down. However, the second phase of the implementation will be less motivating.
What development process are you using? It sounds agile, as you're having iterations, but not in an environment that fully supports it.
TDD will, for just about anybody, slow down initial development. So, if initial development speed is 10 on a 1-10 scale, with TDD you might get around an 8 if you're proficient.
It's the development after that point that gets interesting. As projects get larger, development efficiency typically drops - often to 3 on the same scale. With TDD, it's very possible to still stay in the 7-8 range.
Look up "technical debt" for a good read. As far as I'm concerned, any code without unit tests is effectively technical debt.
TDD helps you to express the intent of your code. This means that when writing the test, you have to say what you expect from your code. How your expectations are fulfilled is then secondary (that is the implementation). Ask yourself the question: "What is more important, the implementation, or the functionality it provides?" If it is the implementation, then you don't have to write the tests. If it is the functionality, then writing the tests first will help you with this.
Another valuable thing is that with TDD you will not implement functionality that won't be needed. You only write code that is needed to satisfy the intent. This is also called YAGNI (You ain't gonna need it).
There's no getting away from it - if you're measuring how long it takes to code just by how long it takes you to write classes, etc, then it'll take longer with TDD. If you're experienced it'll add about 15%, if you're new it'll take at least 60% longer if not more.
BUT, overall you'll be quicker. Why?
by writing a test first you're specifying what you want and delivering just that and nothing more - hence saving time writing unused code
without tests, you might think that the results are so obvious that what you've done is correct - when it isn't. Tests demonstrate that what you've done is correct.
you will get faster feedback from automated tests than by doing manual testing
with manual testing the time taken to test everything as your application grows increases rapidly - which means you'll stop doing it
with manual tests it's easy to make mistakes and 'see' something passing when it isn't, this is especially true if you're running them again and again and again
(good) unit tests give you a second client to your code which often highlights design problems that you might miss otherwise
Add all this up and, if you measure from inception to delivery, TDD is much, much faster - you get fewer defects, you're taking fewer risks, you progress at a steady rate (which makes estimation easier), and the list goes on.
TDD will make you faster, no question, but it isn't easy and you should allow yourself some space to learn and not get disheartened if initially it seems slower.
Finally you should look at some techniques from BDD to enhance what you're doing with TDD. Begin with the feature you want to implement and drive down into the code from there by pulling out stories and then scenarios. Concentrate on implementing your solution scenario by scenario in thin vertical slices. Doing this will help clarify the requirements.
Using TDD could actually make you write code faster - not being able to write a test for a specific scenario could mean that there is an issue in the requirements.
When you TDD you should find these problematic places faster instead of after writing 80% of your code.
There are a few things you can do to make your tests more resistant to change:
You should try to reuse code inside your tests in the form of factory methods that create your test objects, along with verify methods that check the test result (see the sketch after this list). This way, if some major behavior change occurs in your code, you have less code to change in your tests.
Use an IoC container instead of passing arguments to your main classes - again, if a method signature changes you do not need to change all of your tests.
Make your unit tests short and isolated - each test should check only one aspect of your code, and use a mocking/isolation framework to make the test independent of external objects.
Test and write code for only the required feature (YAGNI). Try to ask yourself what value my customer will receive from the code I'm writing. Don't create overcomplicated architecture instead create the needed functionality piece by piece while refactoring your code as you go.
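To make the factory-method and verify-method ideas from the list above concrete, here is a minimal JUnit sketch; the Order class and its fields are invented for illustration:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderTest {

    // Factory method: the only place that knows how to build a test Order.
    // If the constructor signature changes, only this method needs updating.
    private Order newOrder(int quantity, double unitPrice) {
        return new Order("test-customer", quantity, unitPrice);
    }

    // Verify method: shared assertion logic for the result.
    private void verifyTotal(Order order, double expectedTotal) {
        assertEquals(expectedTotal, order.total(), 0.001);
    }

    @Test
    public void totalIsQuantityTimesPrice() {
        verifyTotal(newOrder(3, 2.50), 7.50);
    }

    @Test
    public void zeroQuantityMeansZeroTotal() {
        verifyTotal(newOrder(0, 9.99), 0.0);
    }
}
If the hypothetical Order constructor later gains a parameter, only newOrder() has to change, not every test.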
Here's a blog post I found potent in explaining the use of TDD on a very iterative design process scale: http://blog.extracheese.org/2009/11/how_i_started_tdd.html.
Joshua Bloch commented on something similar in the book "Coders at Work". His advice was to write examples of how the API would be used (about a page in length). Then think about the examples and the API a lot and refactor the API. Then write the specification and the unit tests. Be prepared, however, to refactor the API and rewrite the spec as you implement the API.
When I deal with unclear requirements, I know that my code will need to change. Having solid tests helps me feel more comfortable changing my code. Practising TDD helps me write solid tests, and so that's why I do it.
Although TDD is primarily a design technique, it has one great benefit in your situation: it encourages the programmer to consider details and concrete scenarios. When I do this, I notice that I find gaps or misunderstandings or lack of clarity in requirements quite quickly. The act of trying to write tests forces me to deal with the lack of clarity in the requirements, rather than trying to sweep those difficulties under the rug.
So when I have unclear requirements, I practise TDD both because it helps me identify the specific requirements issues that I need to address, but also because it encourages me to write code that I find easier to change as I understand more about what I need to build.
In this early prototype-phase I find it to be enough to write testable code. That is, when you write your code, think of how to make it possible to test, but for now, focus on the code itself and not the tests.
You should have the tests in place when you commit something though.

How has unit testing made your life better?

Ok let me be honest, I haven't written more than 10 unit tests in my life probably.
I am embarking on a new project, and being the sole programmer means I should be scared ... very scared.
The idea that I can pseudo guarantee that my software works brings about a sense of joy.
Sure I will miss a ton of cases where I should have tested, but that's where I will learn as time goes on.
Unit testing will help me sleep better at night, which is better for my health.
My code will fail, but at least I will have a better idea when it will.
How has unit testing made your life better (or has it?), despite the rest of your team not jumping on the bandwagon?
By far the biggest value that unit tests have on my projects is confidence. With that confidence it's much easier to add new features that weren't planned at the beginning, and to tear code apart to change something or turn it around.
With tests I know that I (or anyone else!) haven't broken something that already worked.
Without tests you are brave (or stupid) when you make big changes and deploy them to production the next minute.
By unit testing, I've reduced the number of "stupid-bugs" that get reported during the testing stage. It's also given me a higher level of confidence in my code.
As Elie said, unit-testing is a great way to flush out "stupid" or simple bugs very early.
For me, it changes the way I think about code; making my code testable makes it less brittle and more flexible (e.g. dependency injection/inversion of control came quite naturally to me because it's something I did for testing purposes anyway).
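As a sketch of how dependency injection keeps code testable (all names here are hypothetical, not from any particular project):
// Hypothetical example: injecting a dependency instead of creating it
// internally, so a test can substitute a fake.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

class CheckoutService {
    private final PaymentGateway gateway;

    // Constructor injection: the test decides which implementation to pass in.
    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean checkout(String account, double amount) {
        return amount > 0 && gateway.charge(account, amount);
    }
}

// In a unit test, a trivial fake replaces the real gateway:
class AlwaysApprovesGateway implements PaymentGateway {
    public boolean charge(String account, double amount) { return true; }
}
A unit test can now construct CheckoutService with AlwaysApprovesGateway (or a fake that declines) and exercise the checkout logic without touching any real payment system.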
The greatest benefit I personally get from a thorough test-suite is the confidence to change complicated code even months after I wrote it without fear of accidentally breaking something.
I'm not yet there, but with some discipline while writing them, unit-tests are a great way to document your code, also.
Okay, as nobody else has stepped up to the plate to play devil's advocate, I'll do it.
Automated unit testing can have good benefits for some projects, but can also have many of the following issues:
It sucks down a ton of engineering resources.
It can take many man-hours to remove environment and setup issues from the equation.
It has a low ROI for some project types, especially GUIs.
It won't catch many errors, because it's impossible to evaluate all execution paths for all but the most trivial programs.
It doesn't catch integration errors.
It doesn't catch broader system-level errors such as functions performed across multiple units, or non-functional areas such as performance.
Test coverage and coverage gates can become an increasingly useless mantra.
It requires a sustainable process for ensuring that test case failures are reviewed and addressed immediately. Otherwise the app will evolve out of sync with the unit test suite.
It has a significant opportunity cost - there are other activities such as code reviews that have an equally good or even better ROI.
It can involve a significant culture change.
So developers shouldn't adopt a dogmatic (yes or no) approach towards unit testing, but instead do an ROI calculation for every project.
Using unit testing in conjunction with TDD provides me with motivation and drive to complete the task at hand. Without the small progress of writing test, fixing test, writing test, fixing test I can become unmotivated.
Unit tests are really helpful when you start debugging as well. If your tests fail, then you know where the bug is almost immediately. If they all run, then you know where the bugs aren't (most of the time).
Another area where unit tests help: migrating software. I'm finding that it's a lot easier to prepare Python code that has unit tests to be migrated to Python 3 than code that doesn't have unit tests.
Sure I will miss a ton of cases where I should have tested, but that's where I will learn as time goes on.
This is where Test Driven Development really shines. You don't have to worry quite as much about having the proper tests because that question will be answered for you beforehand.
Of course, just to make sure we're on the same page, "test driven development" means the process of coding where you write the test, verify that the test fails, and then write the code.
For me, it wasn't just unit tests that changed my life, but Test Driven Development (TDD). I liken it to a religious experience in my blog post (shameless, I know) My Year with TDD.
Getting into testing has been a career changing experience for me. I write less bugs, I write more readable code, I write more cohesive code, I know when something is broken (usually), etc, etc. I owe it all to Test Driven Development.
Try it, you'll like it :)
I've been a TDD(eveloper) since my 2nd* project at uni (back in 1988). I don't know if the term was even in use then.
The best thing is the ability to change things and very quickly check you haven't broken anything else. Easy regression testing.
They are good documentation of object/method usage as well.
*and that was directly because of how the first project went....
The biggest thing unit tests give you is confidence in your code. You know stuff works at a certain level of quality, and you know you can go in and change or rewrite something and not have things fail all over the place where you don't expect them to. Verification is only a test run away.
Now that I make a point of test-driving my code, I know when I'm done with a function, a component, or a feature. Therefore I can report progress accurately.
I know it's not bug free code, but it is functional enough to be integrated, built and delivered to QA. I'm confident they'll be able to start testing without being blocked by a segmentation error, or any other silly problem.
I also have an environment ready so that I can quickly write a new test to reproduce any problem that will be reported, and I have a safety net to detect side effects and regressions when I modify or fix the code.
Can't speak for everyone, but I started writing tests because of a fellow developer that wrote tests that were a big help to me when I was learning the system.
I've also found that tests are a good way to verify assumptions when working with a new code base.
Unit testing is not a silver bullet. But it does have benefits, and it is very satisfying.
I find it means my code gets executed a lot earlier, since before I wrote unit tests, it took a lot of coding before there was enough functionality to try out in the application.
I'm also very grateful for my suite of unit tests when I come to refactor. I rewrote a whole date handling module a while ago, and there's no way I would have had the confidence to do that without regression tests.
I must say that I think the improvements in VS 2010, such as Ctrl+Enter (I think that was what it was), which lets you quickly stub out the interface of a class while writing the test (first), are going to make this a lot easier for me.
Unit tests have multiple advantages. To be more specific about how they make my life better: they increase my confidence when making changes and let me change code much faster later on. They improve my life in the long run.
Of course, in the short term it's a bit more painful because it requires more time, but that is quickly forgotten once you start validating your own work with these tests.
I'm a big fan of unit testing, though my tests don't provide complete coverage nowadays... mostly because I'm working on a web site and much of my code just grabs data from the database, manipulates it, and spits it out. The manipulation code is generally well tested, but it's a real pain to test the database code.
That being said, I can point out a case where unit tests saved me weeks of work...
I was working on a smallish project (4-6 devs) a while back and, after months of work we had reached a state of near completion. At this point, the folks in charge of the product decided that, instead of storing dates (and generating reports using them) in GMT, they wanted everything in EST. Given the product was built to handle large amounts of data/logs and generate information about that data based on timeframes, this was a fairly major change.
Over the course of the next few days, the development team went in and changed everything to deal with EST timestamps. What would have taken us weeks to do had we not had such extensive automated tests took us just 3 days, allowing us to meet an aggressive schedule. We were able to jump into the code and start changing whatever we needed to; the unit tests gave us courage, because we knew the system would complain quickly if we broke something. To this day, I use that experience as an example of how you can never truly understand the benefits of automated testing until it saves you... and it certainly did that for my team.
I'm currently in the process of trying to jump on the bandwagon. Work mates are already doing it before they've written a line of functional code. I'm still writing a full program before I've even run it through the main, let alone unit test it :/
I'll get there in the end I'm sure. But at the moment, I am of ill health through spending 90% of my life debugging :(
I've done quite a bit of work in java, and a year in Ruby.
In Ruby we used extensive testing (TDD). This was ABSOLUTELY REQUIRED. You can write nearly complete garbage into a Ruby file, and if you don't hit that specific line of code, you'll never know, so your tests need near 100% coverage.
In Java, I've never needed much more than a single, simple success path test--and that can usually be thrown away after the code runs. It's really the static type checking, strong typing and using coding patterns like strong encapsulation and parameter checking that makes this possible. You can actually get very close to proving that a small class can't be broken (is bug free) without tests, and when correctly designed, all classes should be small.
Another point of interest: On the Ruby project, we had a refactor that took us 2 days of real code work (split a prime model class into two classes) and 2 weeks of test repairs.
At some point all those tests have a price, they are still code you have to maintain.
That said, I find TDD fun and a good way to get things started, even in Java, and I also would reiterate that I ALWAYS have some success path testing at the very least (even if it's just a quick main method) in virtually every class I write.
Good unit tests that provide sufficient coverage can make you sleep better at night.
If you use assertions, you can find out potential bugs which are missed by the unit tests (sometimes it's not good enough perhaps), and you can sleep even better at night.
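For example (a minimal Java sketch; the Account class is hypothetical, and remember that Java assertions only run when the JVM is started with -ea):
class Account {
    int withdraw(int balance, int amount) {
        int newBalance = balance - amount;
        // A runtime assertion guarding an invariant that no unit test
        // happens to exercise.
        assert newBalance >= 0 : "balance must never go negative";
        return newBalance;
    }
}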
It saves me time, because when I run code TDD, it usually just works when it comes to integration time, so no need to spend agonizing hours debugging.
It also gives me confidence when having a conversation with other developers who claim that an API I created has bugs.
When you get to some critical mass of tests, a nice side-effect is often that if you are about to introduce a bug, it is likely to make some test fail, even if the tests aren't directly against the new, buggy code you are writing.
So you will be alerted to the bug you are about to commit before doing so, and you can then write tests against it and fix it right away.
(This is of course only true if you have tests that are not "just" very narrow test-one-thing-only tests, but I think that is more often the case than not.)

Writing Unit Tests Later

I know that TDD style is writing the test first, see it fails then go and make it green, which is good stuff. Sometimes it really works for me.
However especially when I was experimenting with some stuff (i.e. not sure about the design, not sure if it's going to work) or frantically writing code, I don't want to write unit tests, it breaks my flow.
I tend to write unit tests later on, and especially just before things get too complicated. Also, there is another problem: writing them later is generally more boring.
I'm not quite sure if this is a good approach (definitely not the best).
What do you think? Do you write your unit tests later? How do you deal with this flow problem or the experimental design/code stage?
What I've learned is that there is no experimental code, at least not working in production environments and/or tight deadlines. Experiments are generally carried out until something "works" at which point that becomes the production code.
The other side of this is that TDD from the start will result in better design of your code. You'll be thinking more about it, reworking it, refactoring it more frequently than if you write the tests after the fact.
I've written tests after the fact. Better late then never. They are always worth having.
However, I have to say, the first time I wrote them before writing the tested code, it was extremely satisfying. No more fiddling around with manual testing. I was surprised just how good it felt.
Also, I tend to write unit tests before refactoring legacy code - which, almost by definition, means that I'm writing tests to test code that's already written. Provides a security blanket that makes me more comfortable with getting into big blocks of code written by others.
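A sketch of what such a "security blanket" test can look like in JUnit - the LegacyFormatter class and its output are hypothetical; the point is simply to pin down whatever the legacy code currently does before refactoring it:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Characterization test: before refactoring a legacy formatter, record what
// it does today, whether or not that behavior was ever specified.
public class LegacyFormatterTest {

    @Test
    public void pinsDownCurrentDateFormat() {
        String actual = new LegacyFormatter().format(2009, 12, 31);
        // Whatever the code produces now is the "expected" value;
        // the refactoring must not change it.
        assertEquals("31/12/2009", actual);
    }
}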
"I'm not quite sure if this is a good approach (definitely not the best)."
Not good? Why not?
Are you designing for testability? In that case, your design is test-driven. What more can anyone ask for?
Whether the tests come first, in the middle or last doesn't matter as much as designing for testability. In the end, changes to the design will make tests fail, and you can fix things. Changes to the tests in anticipation of design changes will make the tests fail, also. Both are fine.
If you get to the end of your design work, and there's something hard to test in the middle, then you failed to do TDD. You'll have to refactor your design to make it testable.
I often take the same approach you're talking about. What seems to work well is to treat the experimental code exactly as such, and then start a proper design based on what you've learned. From here you can write your tests first. Otherwise, you're left with lots of code that was written as temporary or experimental, and you probably won't get around to writing tests for all of it.
I would say that for normal development, TDD works extremely well. There are cases where you may not need to write the tests first (or even at all), but those are infrequent. Sometimes, however, you need to do some exploration to see what will work. I would consider this to be a "spike", and I don't necessarily think that TDD is absolutely necessary in this case. I would probably not use the actual "spike" code in my project. After all, it was just an exploration and now that I have a better idea of how it ought to work, I can probably write better code (and tests) than my "spike" code. If I did decide to use my "spike" code, I'd probably go back and write tests for it.
Now, if you find that you've violated TDD and written some production code before your tests - it happens - then, too, I'd go back and write the tests. In fact, on the occasions where this has happened to me, I've often found things that I've neglected once I start writing the tests, because more tests come to mind that aren't handled by the code. Eventually, you get back into the TDD rhythm (and vow never to do that again).
Consider the psychological tendencies associated with sunk cost. That is, when you get to the second part of the equation, that laziness gene we all have makes us want to protect the work we have already done. The consequences?
If you write the tests first...
You tend to write the code to fit the tests. This encourages the "simplest thing that solves the problem" type development and keeps you focused on solving the problem not working on meta-problems.
If you write the code first...
You will be tempted to write the tests to fit the code. In effect this is the equivalent of writing the problem to fit your answer, which is kind of backwards and will quite often lead to tests that are of lesser value.
Although I'd be surprised if 1 programmer out of 50 ALWAYS writes tests first, I'd still argue that it is something to strive for if you want to write good software.
I usually write my tests first but sometime while experimenting I write the code after. Once I get an idea of what my code is supposed to do, I stop the code and start the tests.
Writing the code first is natural when you're trying to figure out how your code is going to work. Writing the test first helps you determine what your code should do (not how it should do it). If you're writing the code first, you're trying to solve the problem without completely defining the problem. This isn't necessarily "bad", but you are using unit tests as a regression tool rather than a development tool (again, not "bad" - just not TDD).
VS 2008 has a nice feature that will generate test classes for an object. The tests need to be tweaked, but it does a lot of the grunt work for you. It's really nice for creating tests for your less-than-diligent co-workers.
Another good point is that it helps prevent you from missing something, especially when you're working on code that isn't yours.
If you're using a different testing framework than MSUnitTest, it's fairly simple to convert the tests from MSUnit to NUnit, etc. Just do some copy and paste.
I would like to say that I always write Unit tests first but of course I don't (for numerous reasons well known to any real programmer :-)). What I (ok, also not always...) do is to convert every bug which takes me more than five minutes to find into a unit test. Even before I fix it. This has the following advantages:
It documents the bug and alerts me if it shows up again at a later point of time.
It helps in finding the bug, since I have a well-defined place to put debugging code into (setting up my data structures, call the right methods, set breakpoints on etc.) Before I discovered unit testing I modified the main() function for this testing code resulting in strange results when I forgot to remove it afterwards ...
Usually it gives me good ideas about what else could go wrong, so it quite often evolves into a whole bunch of unit tests, resulting in more than one bug getting discovered and fixed.
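For instance, a bug-turned-test might be as small as this JUnit sketch (NumberParser and the reported behaviour are invented for the example):
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Sketch of turning a reported bug into a test before fixing it.
// Hypothetical bug report: parsing "  42 " throws instead of returning 42.
public class NumberParserBugTest {

    @Test
    public void parsesInputWithSurroundingWhitespace() {
        // This test fails until the bug is fixed, then documents it forever.
        assertEquals(42, new NumberParser().parse("  42 "));
    }
}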

Should unit tests be written before the code is written?

I know that one of the defining principles of Test driven development is that you write your Unit tests first and then write code to pass those unit tests, but is it necessary to do it this way?
I've found that I often don't know what I am testing until I've written it, mainly because the past couple of projects I've worked on have more evolved from a proof of concept rather than been designed.
I've tried to write my unit tests before and it can be useful, but it doesn't seem natural to me.
Some good comments here, but I think that one thing is getting ignored.
writing tests first drives your design. This is an important step. If you write the tests "at the same time" or "soon after" you might be missing some design benefits of doing TDD in micro steps.
It feels really cheesy at first, but it's amazing to watch things unfold before your eyes into a design that you didn't think of originally. I've seen it happen.
TDD is hard, and it's not for everybody. But if you already embrace unit testing, then try it out for a month and see what it does to your design and productivity.
You spend less time in the debugger and more time thinking about outside-in design. Those are two gigantic pluses in my book.
There have been studies that show that unit tests written after the code has been written are better tests. The caveat though is that people don't tend to write them after the event. So TDD is a good compromise as at least the tests get written.
So if you write tests after you have written code, good for you, I'd suggest you stick at it.
I tend to find that I do a mixture. The more I understand the requirements, the more tests I can write up front. When the requirements - or my understanding of the problem - are weak, I tend to write tests afterwards.
TDD is not about the tests, but how the tests drive your code.
So basically you are writing tests to let an architecture evolve naturally (and don't forget to refactor !!! otherwise you won't get much benefit out of it).
That you end up with an arsenal of regression tests and executable documentation afterwards is a nice side effect, but not the main reason behind TDD.
So my vote is:
Test first
PS: And no, that doesn't mean that you don't have to plan your architecture before, but that you might rethink it if the tests tell you to do so !!!!
I've lead development teams for the past 6-7 years. What I can tell for sure is that as a developer and the developers I have worked with, it makes a phenomenal difference in the quality of the code if we know where our code fits into the big picture.
Test Driven Development (TDD) helps us answer "What?" before we answer "How?" and it makes a big difference.
I understand why there may be apprehensions about not following it in PoC-type development/architecture work. And you are right, it may not make complete sense to follow this process. At the same time, I would like to emphasize that TDD is a process that falls in the Development Phase (I know it sounds obsolete, but you get the point :)) when the low-level specifications are clear.
I think writing the test first helps define what the code should actually do. Too many times people don't have a good definition of what the code is supposed to do or how it should work. They simply start writing and make it up as they go along. Creating the test first makes you focus on what the code will do.
Not always, but I find that it really does help when I do.
I tend to write them as I write my code. At most, I will write a test for whether the class/module exists before I write it.
I don't plan far enough ahead in that much detail to write a test earlier than the code it is going to test.
I don't know if this is a flaw in my thinking or method's or just TIMTOWTDI.
I start with how I would like to call my "unit" and make it compile.
like:
picker = Pick.new
item = picker.pick('a')
assert item
then I create
class Pick
  def pick(something)
    return nil
  end
end
Then I keep on using Pick in my "test" case so I can see how I would like it to be called and how I would treat different kinds of behavior. Whenever I realize I could have trouble on some boundary or with some kind of error/exception, I try to get it to fire and add a new test case.
So, in short. Yes.
I write tests before the code far more often than not.
Directives are suggestions on how you could do things to improve the overall quality or productivity, or both, of the end product. They are in no way laws to be obeyed, lest you be smitten in a flash by the god of proper coding practice.
Here's my compromise on the take and I found it quite useful and productive.
Usually the hardest part to get right is the requirements, and right behind that the usability of your class, API, package... Then comes the actual implementation.
1. Write your interfaces (they will change, but they go a long way toward knowing WHAT has to be done).
2. Write a simple program to use the interfaces (the stupid main). This goes a long way in determining HOW it is going to be used (go back to 1 as often as needed).
3. Write tests on the interface (the bit I integrated from TDD; again, go back to 1 as often as needed). A small sketch follows this list.
4. Write the actual code behind the interfaces.
5. Write tests on the classes and the actual implementation; use a coverage tool to make sure you do not forget weird execution paths.
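Here is a rough Java sketch of steps 1-3 above; every name in it is invented for illustration:
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// 1. The interface captures WHAT has to be done.
interface ReportGenerator {
    String generate(String customerId);
}

// 2. The "stupid main" explores HOW it will be used.
class ReportMain {
    public static void main(String[] args) {
        ReportGenerator generator = newGenerator();
        System.out.println(generator.generate("ACME"));
    }

    static ReportGenerator newGenerator() {
        return customerId -> "report for " + customerId; // placeholder implementation
    }
}

// 3. Tests target the interface only, so they survive changes to the
// implementation behind it.
public class ReportGeneratorTest {
    @Test
    public void reportMentionsTheCustomer() {
        assertTrue(ReportMain.newGenerator().generate("ACME").contains("ACME"));
    }
}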
So, yes, I write tests before coding, but never before I have figured out what needs to be done at a certain level of detail. These are usually high-level tests and only treat the whole as a black box. They usually remain as integration tests and will not change much once the interfaces have stabilized.
Then I write a bunch of tests (unit tests) on the implementation behind it; these will be much more detailed and will change often as the implementation evolves, as it gets optimized and expanded.
Is this strictly speaking TDD? Extreme? Agile? Whatever? I don't know, and frankly I don't care. It works for me. I adjust it as needs go and as my understanding of software development practice evolves.
my 2 cent
I've been programming for 20 years, and I've virtually never written a line of code that I didn't run some kind of unit test on--Honestly I know people do it all the time, but how someone can ship a line of code that hasn't had some kind of test run on it is beyond me.
Often if there is no test framework in place I just write a main() into each class I write. It adds a little cruft to your app, but someone can always delete it (or comment it out) if they want I guess. I really wish there was just a test() method in your class that would automatically compile out for release builds--I love my test method being in the same file as my code...
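A sketch of that habit (the Stack class here is just an illustrative example, not code from a real project):
// Quick ad-hoc test living next to the code it checks.
class Stack {
    private final int[] items = new int[10];
    private int size = 0;

    void push(int value) { items[size++] = value; }
    int pop() { return items[--size]; }

    // Ad-hoc test; delete or comment it out for release builds.
    public static void main(String[] args) {
        Stack s = new Stack();
        s.push(1);
        s.push(2);
        System.out.println("expected 2, got " + s.pop());
        System.out.println("expected 1, got " + s.pop());
    }
}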
So I've done both Test Driven Development and Tested development. I can tell you that TDD can really help when you are a starting programmer. It helps you learn to view your code "From outside" which is one of the most important lessons a programmer can learn.
TDD also helps you get going when you are stuck. You can just write some very small piece that you know your code has to do, then run it and fix it--it gets addictive.
On the other hand, when you are adding to existing code and know pretty much exactly what you want, it's a toss-up. Your "Other code" often tests your new code in place. You still need to be sure you test each path, but you get a good coverage just by running the tests from the front-end (except for dynamic languages--for those you really should have unit tests for everything no matter what).
By the way, when I was on a fairly large Ruby/Rails project we had a very high % of test coverage. We refactored a major, central model class into two classes. It would have taken us two days, but with all the tests we had to refactor it ended up closer to two weeks. Tests are NOT completely free.
I'm not sure, but from your description I sense that there might be a misunderstanding on what test-first actually means. It does not mean that you write all your tests first. It does mean that you have a very tight cycle of
1. Write a single, minimal test.
2. Make the test pass by writing the minimal production code necessary.
3. Write the next test that will fail.
4. Make all the existing tests pass by changing the existing production code in the simplest possible way.
5. Refactor the code (both test and production!) so that it doesn't contain duplication and is expressive.
6. Continue with 3. until you can't think of another sensible test.
One cycle (3-5) typically just takes a couple of minutes. Using this technique, you actually evolve the design while you write your tests and production code in parallel. There is not much up front design involved at all.
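For instance, the very first pass through steps 1 and 2 might be no more than this (a contrived JUnit sketch; PriceCalculator is invented for the example):
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1: a single, minimal failing test.
public class PriceCalculatorTest {
    @Test
    public void emptyBasketCostsNothing() {
        assertEquals(0, new PriceCalculator().total());
    }
}

// Step 2: the minimal production code that makes it pass.
class PriceCalculator {
    int total() {
        return 0; // hard-coded on purpose; the next failing test forces real logic
    }
}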
On the question of it being "necessary" - no, it obviously isn't. There have been uncountable projects successfull without doing TDD. But there is some strong evidence out there that using TDD typically leads to significantly higher quality, often without negative impact on productivity. And it's fun, too!
Oh, and regarding it not feeling "natural", it's just a matter of what you are used to. I know people who are quite addicted to getting a green bar (the typical xUnit sign for "all tests passing") every couple of minutes.
There are so many answers now and they are all different. This perfectly resembles the reality out there. Everyone is doing it differently. I think there is a huge misunderstanding about unit testing. It seems to me as if people heard about TDD and they said it's good. Then they started to write unit tests without really understanding what TDD really is. They just got the part "oh yeah we have to write tests" and they agree with it. They also heard about this "you should write your tests first" but they do not take this serious.
I think it's because they do not understand the benefits of test-first, which in turn you can only understand once you've done it this way for some time. And they always seem to find 1,000,000 excuses why they don't like writing the tests first - because it's too difficult when figuring out how everything will fit together, etc. In my opinion, they are all excuses to hide from their inability to discipline themselves, try the test-first approach and start to see the benefits.
The most ridiculous thing is when they argue "I'm not convinced about this test-first thing, but I've never done it this way" ... great ...
I wonder where unit testing originally comes from. Because if the concept really originates from TDD, then it's just ridiculous how people get it wrong.
Writing the tests first defines what your code will look like - i.e. it tends to make your code more modular and testable, so you do not create "bloated" methods with very complex and overlapping functionality. This also helps to isolate all core functionality in separate methods for easier testing.
Personally, I believe unit tests lose a lot of their effectiveness if not done before writing the code.
The age old problem with testing is that no matter how hard we think about it, we will never come up with every possibly scenario to write a test to cover.
Obviously unit testing itself doesn't prevent this completely, as it is restricted testing - looking at only one unit of code, not covering the interactions between this code and everything else - but it provides a good basis for writing clean code in the first place, which should at least restrict the chances of interaction issues between modules. I've always worked to the principle of keeping code as simple as it possibly can be - in fact, I believe this is one of the key principles of TDD.
So you start off with a test that basically says you can create a class of this type, and build it up from there, in theory writing a test for every line of code, or at least covering every route through a particular piece of code. Designing as you go! Obviously this is based on a rough up-front design produced initially, to give you a framework to work to.
As you say, it is very unnatural to start with and can seem like a waste of time, but I've seen first hand that it pays off in the long run when the defect stats come through and show that the modules written entirely with TDD have far fewer defects over time than the others.
Before, during and after.
Before is part of the spec, the contract, the definition of the work
During is when special cases, bad data, exceptions are uncovered while implementing.
After is maintenance, evolution, change, new requirements.
I don't write the actual unit tests first, but I do make a test matrix before I start coding listing all the possible scenarios that will have to be tested. I also make a list of cases that will have to be tested when a change is made to any part of the program as part of regression testing that will cover most of the basic scenarios in the application in addition to fully testing the bit of code that changed.
Remember, with Extreme Programming your tests effectively are your documentation. So if you don't know what you're testing, then you don't know what you want your application to do.
You can start off with "Stories" which might be something like
"Users can Get list of Questions"
Then you start writing code to satisfy the unit tests. To solve the above you'll need at least a User and a Question class. So then you can start thinking about the fields:
"User Class Has Name DOB Address TelNo Locked Fields"
etc.
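As a rough sketch of how that story might turn into a first test (User, Question and their methods are placeholders here, not a real API):
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import java.util.List;

// The story "Users can get list of questions" turned into a first test.
public class UserStoryTest {

    @Test
    public void userCanGetListOfQuestions() {
        User user = new User("Alice");
        List<Question> questions = user.getQuestions();
        assertEquals(0, questions.size()); // a new user starts with none
    }
}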
Hope it helps.
Crafty
Yes, if you are using true TDD principles. Otherwise, as long as you're writing the unit-tests, you're doing better than most.
In my experience, it is usually easier to write the tests before the code, because by doing it that way you give yourself a simple debugging tool to use as you write the code.
I write them at the same time. I create the skeleton code for the new class and the test class, and then I write a test for some functionality (which then helps me to see how I want the new object to be called), and implement it in the code.
Usually I don't end up with elegant code the first time around; it's normally quite hacky. But once all the tests are working, you can refactor away until you end up with something pretty neat, tidy and provably rock solid.
When you are writing something that you are used to writing, it helps to first write tests for all the things you would regularly check for, and then write those features. More often than not, those features are the most important ones for the piece of software you are writing. On the other hand, there are no silver bullets, and nothing should be followed to the letter. Developer judgment plays a big role in the decision between test-driven development and test-later development.

How is unit testing better than just testing the entire output of your application as a whole?

I don't understand how a unit test could possibly be of benefit.
Isn't it sufficient for a tester to test the entire output as a whole rather than doing unit tests?
Thanks.
What you are describing is integration testing. What integration testing will not tell you is which piece of your massive application is not working correctly when your output is no longer correct.
The advantage to unit testing is that you can write a test for each business assumption or algorithm step that you need your program to perform. When someone adds or changes code in your application, you immediately know exactly which step, which piece, and maybe even which line of code is broken when a bug is introduced. The time savings on maintenance for that reason alone make it worthwhile, but there is an even bigger advantage in that regression bugs cannot be introduced (assuming your tests are run automatically when you build your software). If you fix a bug, and then write a test specifically to catch that bug in the future, there is no way someone could accidentally introduce it again.
The combination of integration testing and unit testing can let you sleep much easier at night, especially when you've checked in a big piece of code that day.
The earlier you catch bugs, the cheaper they are to fix. A bug found during unit testing by the coder is pretty cheap (just fix the darn thing).
A bug found during system or integration testing costs more, since you have to fix it and restart the test cycle.
A bug found by your customer will cost a lot: recoding, retesting, repackaging and so forth. It may also leave a painful boot print on your derriere when you inform management that you didn't catch it during unit testing because you didn't do any, thinking that the system testers would find all the problems :-)
How much money would it cost GM to recall 10,000 cars because the catalytic converter didn't work properly?
Now think of how much it would cost them if they discovered that immediately after those converters were delivered to them, but before they were put into those 10,000 cars.
I think you'll find the latter option to be quite a bit cheaper.
That's one reason why test driven development and continuous integration are (sometimes) a good thing - testing is done all the time.
In addition, unit tests don't check that the program works as a whole, just that each little bit performs as expected. That's often quite a lot more than higher level tests would check.
From my experience:
Integration and functional testing tend to be more indicative of the overall quality of the system than the unit test suite is.
High level testing (functional, acceptance) is a QA tool.
Unit testing is a development tool. Especially in a TDD context, where unit test becomes more of a design implement, rather than that of a quality assurance.
As a result of better design, quality of the entire system improves (indirectly).
Passing the unit test suite is meant to ensure that a single component conforms to the developer's intentions (correctness). The acceptance test is the level that covers the validity of the system (i.e. the system does what the user wants it to do).
Summary:
Unit test is meant as a development tool first, QA tool second.
Acceptance test is meant as a QA tool.
There is still a need for a certain level of manual testing to be performed but unit testing is used to decrease the number of defects that make it to that stage. Unit testing tests the smallest parts of the system and if they all work the chances of the application as a whole working correctly are increased significantly.
It also assists when adding new features since regression testing can be performed quickly and automatically.
For a complex enough application, testing the entire output as a whole may not cover enough different possibilities. For example, any given application has a huge number of different code paths that can be followed depending on input. In typical testing, there may be many parts of your code that are simply never encountered, because they are only used in certain circumstances, so you can't be sure that any code that isn't run in your test situation, actually works. Also, errors in one section of code may be masked a majority of the time by something else in another section of code, so you may never discover some errors.
It is better to test each function or class separately. That way, the test is easier to write, because you are only testing a certain small section of the code. It's also easier to cover every possible code path when testing, and if you test each small part separately then you can detect errors even when those errors would often be masked by other parts of your code when run in your application.
Do yourself a favor and try out unit testing first. I was quite the skeptic myself until I realized just how darned helpful/powerful unit-tests can be. If you think about it, they aren't really there to add to your workload. They are there to provide you with peace of mind and allow you to continue extending your application while ensuring that your code is solid. You get immediate feedback as to when you may have broke something and this is something of extraordinary value.
To your question regarding why to test small sections of code, consider this: Suppose your giant app uses a cool XOR encryption scheme that you wrote, and eventually product management changes the requirements of how you generate these encrypted strings. So you say: "Heck, I wrote the encryption routine so I'll go ahead and make the change. It'll take me 15 minutes and we'll all go home and have a party." Well, perhaps you introduced a bug during this process. But wait!!! Your handy dandy TestXOREncryption() test method immediately tells you that the expected output did not match the input. Bingo, this is why you broke down your unit tests ahead of time into small "units" to test for, because in your big giant application you would not have figured this out nearly as fast.
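A test along the lines of that TestXOREncryption() idea might look like this (a sketch only; Crypto.xorEncrypt is a hypothetical method standing in for the routine described above):
import org.junit.Test;
import static org.junit.Assert.assertArrayEquals;

public class XorEncryptionTest {

    @Test
    public void encryptingTwiceWithTheSameKeyRestoresThePlaintext() {
        byte[] plaintext = "secret".getBytes();
        byte[] key = {0x5A};
        // XOR is its own inverse, so a round trip must give back the original.
        byte[] roundTrip = Crypto.xorEncrypt(Crypto.xorEncrypt(plaintext, key), key);
        assertArrayEquals(plaintext, roundTrip);
    }
}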
Also, once you get into the frame of mind of regularly writing unit tests you'll realize that although you pay an upfront cost in the beginning in terms of time, you'll get that back 10 fold later in the development cycle when you can quickly identify areas in your code that have introduced problems.
There is no magic bullet with unit tests because your ability to identify problems is only as good as the tests you write. It boils down to delivering a better product and relieving yourself of stress and headaches. =)
Agree with most of the answers. Let's drill down on the topic of speed. Here are some real numbers:
Unit test results in 1 or 2 minutes from a fresh compile. As true unit tests (no interaction with external systems like DBs) they can cover a lot of logic really fast.
Automated functional test results in 1 or 2 hours. These run on a simplified platform, but sometimes cover multiple systems and the database - which really kills the speed.
Automated integration test results once a day. These exercise the full meal deal, but are so heavy and slow, we can only execute them once a day and it takes a few hours.
Manual regression results come in after a few weeks. We get stuff over to testers a few times a day, but your change isn't realistically regressed for a week or two at best.
I want to find out what I broke in 1 or 2 minutes, not a few weeks, not even a few hours. That's where the 10fold ROI on unit tests that people talk about comes from.
This is a tough question to approach because it questions something of such enormous breadth. Here's my short answer, however:
Test Driven Development (or TDD) seeks to prove that every logical unit of an application (or block of code) functions exactly as it should. By making tests as automated as possible for productivity's sake, how could this really be harmful?
By testing every logical piece of code, you can trust the usage of the code up some hierarchy. Say I build an application that relies on a thread-safe stack implementation. Shouldn't the stack be guaranteed to work up at every stage before I build on it?
The key is that if something in the whole application breaks, meaning just looking at the total output/outcome, how do you know where it came from? Well, debugging, of course! Which puts you back where you started. TDD allows you to -hopefully- bypass this most painful stage in development.
Testers generally test end to end functionality. Obviously this is geared for going at user scenarios and has incredible value.
Unit tests serve a different purpose. They are the developer's way of verifying that the components they write work correctly in the absence of other features or in combination with other features. This offers a range of value, including:
Provides un-ignorable documentation
Ability to isolate bugs to specific components
Verify invariants in the code
Provide quick, immediate feedback to changes in the code base.
One place to start is regression testing. Once you find a bug, write a small test that demonstrates the bug, fix it, then make sure the test now passes. In future you can run that test before each release to ensure that the bug has not been reintroduced.
Why do that at a unit level instead of a whole-program level? Speed. In good code it's much faster to isolate a small unit and write a tiny test than to drive a complex program through to the bug point. And at test time, a unit test will generally run significantly faster than an integration test.
Very simply: Unit tests are easier to write, since you're only testing a single method's functionality. And bugs are easier to fix, since you know exactly what method is broken.
But like the other answerers have pointed out, unit tests aren't the end-all-be-all of testing. They're just the smallest piece of the equation.
Probably the single biggest difficulty with software is the sheer number of interacting things, and the most useful technique is to reduce the number of things that have to be considered.
For example, using higher-level languages rather than lower-level improves productivity, because one line is a separate thing, and being able to write a program in fewer lines reduces the number of things.
Procedural programming came about as an attempt to reduce complexity by making it possible to treat a function as a thing. In order to do that, though, we have to be able to think about what the function does in a coherent manner, and with confidence that we're right. (Object-oriented programming does a similar thing, on a larger scale.)
There are several ways to do this. Design-by-contract is a way of exactly specifying what the function does. Using function parameters rather than global variables to call the function and get results reduces the complexity of the function.
Unit testing is one way to verify that the function does what it is supposed to. It's usually possible to test all the code in a function, and sometimes all the execution paths. It is a way to tell if the function works as it should or not. If the function works, we can think about it as a single thing, rather than as multiple things we have to keep track of.
It serves other purposes. Unit tests are usually quick to run, and so can catch bugs quickly, when they're easy to fix. If developers make sure a function passes the tests before being checked in, then the tests are a form of documenting what the function does that is guaranteed correct. The act of creating the tests forces the test writer to think about what the function should be doing. After that, whoever wanted the change can look at the tests to see if he or she was properly understood.
By way of contrast, larger tests are not exhaustive, and so can easily miss lots of bugs. They're bad at localizing bugs. They are usually performed at fairly long intervals, so they may detect a bug some time after it's made. They define parts of the total user experience, but provide no basis to reason about any part of the system. They should not be neglected, but they are not a substitute for unit tests.
As others have stated, the length of the feedback loop and isolation of the problem to a specific component are key benefits of Unit Tests.
Another way that they are complementary to functional tests is how coverage is tracked in some organizations:
Unit tests on code coverage
Functional tests on requirements coverage
Functional tests might miss features that were implemented but are not in the spec.
Being based on the code, Unit tests might miss that a certain feature wasn't implemented, which is where requirements based coverage analysis of Functional testing comes in.
A final point: there are some things that are easier/faster to test at the unit level, especially around error scenarios.
Unit testing will help you identify the source of your bug more clearly and let you know that you have a problem earlier. Both are good to have, but they are different, and unit testing does have benefits.
The software you test is a system. When you are testing it as a whole you are black box testing since you primarily deal with inputs and outputs. Black box testing is great when you have no means of getting inside of the system.
But since you usually do, you create a lot of unit tests that actually test your system as a white box. You can slice the system open in many ways and organize your tests depending on its internal structure. White box testing provides you with many more ways of testing and analyzing systems. It's clearly complementary to black box testing and should not be considered an alternative or competing methodology.