Beginning TDD - Challenges? Solutions? Recommendations? [closed]

OK, I know there have already been questions about getting started with TDD, and I know the general consensus is to just do it. However, I seem to have the following problems getting my head into the game:
When working with collections, do we still test the obvious adds/removes/inserts succeeding, even when they are based on generics etc., where we kind of "know" it's going to work?
Some tests seem to take forever to implement, such as those involving string output. Is there a "better" way to go about this sort of thing? (e.g. test the object model before parsing, break parsing down into small operations and test there.) In my mind you should always test the "end result", but that can vary wildly and be tedious to set up.
I don't have a testing framework to use (work won't pay for one) that I could use to "practice" more. Are there any good ones that are free for commercial use? (At the moment I am using good ol' Debug.Assert :)
Probably the biggest: sometimes I don't know what to expect NOT to happen. I mean, you get your green light, but I am always concerned that I may be missing a test. Do you dig deeper to try and break the code, or leave it be and wait for it all to fall over later (which will cost more)?
So basically what I am looking for here is not a "just do it" but more of an "I did this, had problems with this, solved them by this". The personal experience :)

First, it is all right and normal to feel frustrated when you first start trying to use TDD in your coding style. Just don't get discouraged and quit; you will need to give it some time. It is a major paradigm shift in how we think about solving a problem in code. I like to think of it like when we switched from procedural to object-oriented programming.
Secondly, I feel that test-driven development is first and foremost a design activity that is used to flesh out the design of a component by creating a test that first describes the API it is going to expose and how you are going to consume its functionality. The test will help shape and mold the System Under Test until you have been able to encapsulate enough functionality to satisfy whatever task you happen to be working on.
Taking the above paragraph in mind, let's look at your questions:
If I am using a collection in my system under test, then I will set up an expectation to make sure that the code was called to insert the item, and then assert the count of the collection. I don't necessarily test the Add method on my internal list; I just make sure it was called when the method that adds the item is called. I do this by adding a mocking framework into the mix alongside my testing framework.
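Here is a minimal sketch of that idea, assuming NUnit and Moq; the OrderBook and IOrderStore names are invented for illustration:

```csharp
// Sketch only: verify the insert was requested, rather than re-testing
// the framework's own List<T>.Add. IOrderStore/OrderBook are hypothetical.
using Moq;
using NUnit.Framework;

public interface IOrderStore
{
    void Add(string order);
}

public class OrderBook
{
    private readonly IOrderStore _store;
    public OrderBook(IOrderStore store) { _store = store; }
    public void Place(string order) { _store.Add(order); }
}

[TestFixture]
public class OrderBookTests
{
    [Test]
    public void Place_AddsOrderToStore()
    {
        var store = new Mock<IOrderStore>();
        var book = new OrderBook(store.Object);

        book.Place("widget");

        // The expectation: the insert happened exactly once.
        store.Verify(s => s.Add("widget"), Times.Once());
    }
}
```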
Testing strings as output can be tedious. You cannot account for every outcome; you can only test what you expect based on the functionality of the system under test. You should always break your tests down to the smallest element being tested. This means you will have a lot of tests, but tests that are small and fast and only test what they should, nothing else.
There are a lot of open source testing frameworks to choose from. I am not going to argue which is best. Just find one you like and start using it.
MbUnit
NUnit
xUnit
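Whichever you pick, the shape is about the same. For instance, a minimal NUnit test looks like this; Calculator here is a placeholder for your own code:

```csharp
// A first NUnit test: the framework discovers it, runs it, and reports
// pass/fail, so there is no output to pore over. Calculator is hypothetical.
using NUnit.Framework;

public static class Calculator
{
    public static int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfOperands()
    {
        Assert.AreEqual(5, Calculator.Add(2, 3));
    }
}
```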
All you can do is set up your tests to account for what you want to happen. If a scenario comes up that introduces a bug in your functionality, at least you have a test around the functionality, so you can add that scenario to the tests and then change your functionality until the test passes. One way to find where we may have missed a test is to use code coverage.
I introduced you to the mocking term in the answer to question one. When you introduce mocking into your arsenal for TDD, it becomes dramatically easier to abstract away the parts that are not part of the system under test. Here are some of the mocking frameworks out there:
Moq: Open Source
RhinoMocks: Open Source
TypeMock: Commercial Product
NSubstitute: Open Source
One way to learn TDD, besides reading about the process, is to watch people do it. I recommend watching the screencasts by JP Boodhoo on DNRTV. Check these out:
Jean Paul Boodhoo on Test Driven Development Part 1
Jean Paul Boodhoo on Test Driven Development Part 2
Jean Paul Boodhoo on Demystifying Design Patterns Part 1
Jean Paul Boodhoo on Demystifying Design Patterns Part 2
Jean Paul Boodhoo on Demystifying Design Patterns Part 3
Jean Paul Boodhoo on Demystifying Design Patterns Part 4
Jean Paul Boodhoo on Demystifying Design Patterns Part 5
OK, these will help you see how the terms I introduced are used. They will also introduce another tool called ReSharper and show how it can facilitate the TDD process. I can't recommend this tool enough when doing TDD. It sounds like you are learning the process and just running into some of the problems that have already been solved with other tools.
I think I would be doing an injustice to the community, if I didn't update this by adding Kent Beck's new series on Test Driven Development on Pragmatic Programmer.

From my own experience:
Only test your own code, not the underlying framework's code. So if you're using a generic list then there's no need to test Add, Remove etc.
There is no 2. Look over there! Monkeys!!!
NUnit is the way to go.
You definitely can't test every outcome. I test for what I expect to happen, and then test a few edge cases where I expect to get exceptions or invalid responses. If a bug comes up down the track because of something you forgot to test, the first thing you should do (before trying to fix the bug) is write a test to prove that the bug exists.
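A sketch of that bug-first step in NUnit; PriceParser and the scenario are invented for illustration:

```csharp
// Sketch: write a test that proves the reported bug exists. It fails until
// the fix is in, then prevents the bug from creeping back. All names are
// hypothetical.
using System.Globalization;
using NUnit.Framework;

public static class PriceParser
{
    public static decimal ParsePrice(string text)
    {
        return decimal.Parse(text, CultureInfo.InvariantCulture);
    }
}

[TestFixture]
public class PriceParserTests
{
    [Test]
    public void ParsePrice_HandlesValuesWithoutDecimalPoint()
    {
        // The scenario from the bug report, captured as a test.
        Assert.AreEqual(100m, PriceParser.ParsePrice("100"));
    }
}
```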

My take on this is the following:
+1 for not testing framework code, but you may still need to test classes derived from framework classes.
If some class/method is cumbersome to test, it may be a strong indication that something is wrong with the design. I try to follow the "1 class, 1 responsibility; 1 method, 1 action" principle. That way you will be able to test complex methods much more easily, in smaller portions.
+1 for xUnit. For Java you may also consider TestNG.
TDD is not a single event, it is a process. So do not try to envision everything from the beginning, but make sure that every bug found in the code is covered by a test once discovered.

I think the most important thing with (and actually one of the great outcomes of, in a somewhat recursive manner) TDD is successful management of dependencies. You have to make sure that modules are tested in isolation with no elaborate setup needed. For example, if you're testing a component that eventually sends an email, make the email sender a dependency so that you can mock it in your tests.
This leads to a second point: mocks are your friends. Get familiar with mocking frameworks and the style of tests they promote (behavioural, as opposed to the classic state-based), and the design choices they encourage (the "Tell, don't ask" principle).
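A minimal sketch of the email example above, assuming NUnit and Moq; IEmailSender and SignupService are invented names:

```csharp
// The email sender is a constructor dependency, so the test can substitute
// a mock and no real email is ever sent. All names here are hypothetical.
using Moq;
using NUnit.Framework;

public interface IEmailSender
{
    void Send(string to, string subject, string body);
}

public class SignupService
{
    private readonly IEmailSender _email;
    public SignupService(IEmailSender email) { _email = email; }

    public void Register(string address)
    {
        // ... persist the user, then notify them:
        _email.Send(address, "Welcome", "Thanks for signing up.");
    }
}

[TestFixture]
public class SignupServiceTests
{
    [Test]
    public void Register_SendsWelcomeEmail()
    {
        var email = new Mock<IEmailSender>();
        var service = new SignupService(email.Object);

        service.Register("a@example.com");

        // Behavioural ("tell, don't ask") check: the interaction happened.
        email.Verify(e => e.Send("a@example.com", It.IsAny<string>(), It.IsAny<string>()));
    }
}
```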

I found that the principles illustrated in "Three Index Cards to Easily Remember the Essence of TDD" are a good guide.
Anyway, to answer your questions
You don't have to test something you "know" is going to work, unless you wrote it. You didn't write generics, Microsoft did ;)
If you need to do so much for your test, maybe your object/method is doing too much as well.
Download TestDriven.NET to start unit testing immediately in Visual Studio (except if it's an Express edition).
Just test the thing you expect to happen. You don't need to test everything that can go wrong; wait for your tests to fail for that.
Seriously, just do it, dude. :)

I am no expert at TDD, by any means, but here is my view:
If it is completely trivial (getters/setters etc) do not test it, unless you don't have confidence in the code for some reason.
If it is a quite simple but non-trivial method, test it. The test is probably easy to write anyway.
When it comes to what to expect not to happen, I would say that if a certain potential problem is the responsibility of the class you are testing, you need to test that it handles it correctly. If it is not the current class' responsibility, don't test it.
The xUnit testing frameworks are often free to use, so if you are a .Net guy, check out NUnit, and if Java is your thing check out JUnit.

The above advice is good, and if you want a list of free frameworks you need look no further than the xUnit Frameworks List on Wikipedia. Hope this helps :)

In my opinion (your mileage may vary):
1- If you didn't write it, don't test it. If you wrote it and you don't have a test for it, it doesn't exist.
3- As everyone's said, xUnit's free and great.
2 & 4- Deciding exactly what to test is one of those things you can debate with yourself about forever. I try to draw this line using the principles of design by contract. Check out "Object-Oriented Software Construction" or "The Pragmatic Programmer" for details.

Keep tests short and "atomic": test the smallest assumption in each test. Make each test method independent; for integration tests I even create a new database for each method. If you need to build some data for each test, use an "Init" method (a sketch follows below). Use mocks to isolate the class you're testing from its dependencies.
I always think, "What's the minimum amount of code I need to write to prove this works for all cases?"
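A sketch of that "Init" idea in NUnit, where a [SetUp] method rebuilds the fixture before every test so the tests stay independent; the basket example is invented:

```csharp
// [SetUp] runs before each [Test], so every test starts from fresh data
// and no test depends on another's leftovers. Example names are hypothetical.
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class BasketTests
{
    private List<string> _basket;

    [SetUp]
    public void Init()
    {
        _basket = new List<string> { "apple" };  // fresh fixture per test
    }

    [Test]
    public void Add_IncreasesCount()
    {
        _basket.Add("pear");
        Assert.AreEqual(2, _basket.Count);
    }

    [Test]
    public void Remove_DecreasesCount()
    {
        _basket.Remove("apple");
        Assert.AreEqual(0, _basket.Count);
    }
}
```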

Over the last year I have become more and more convinced of the benefits of TDD.
The things that I have learned along the way:
1) Dependency injection is your friend. I'm not talking about inversion-of-control containers and frameworks to assemble plugin architectures, just passing dependencies into the constructor of the object under test. This pays huge dividends in the testability of your code.
2) I set out with the passion/zealotry of the convert, grabbed a mocking framework, and set about using mocks for everything I could. This led to brittle tests that required lots of painful setup and would fall over as soon as I started any refactoring. Use the correct kind of test double: fakes where you just need to honour an interface, stubs to feed data back to the object under test, and mocks only where you care about the interaction (see the sketch after this list).
3) Tests should be small. Aim for one assertion or interaction being tested in each test. I try to do this and mostly I'm there. This is about the robustness of the test code, and also about the amount of complexity in a test when you need to revisit it later.
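One way to picture the stub/mock distinction from point 2, again assuming NUnit and Moq; all names are invented:

```csharp
// The rate source is used as a stub (it feeds data back); the audit log is
// used as a mock (we care that the interaction happened). Hypothetical names.
using Moq;
using NUnit.Framework;

public interface IRateSource { decimal CurrentRate(); }
public interface IAuditLog { void Record(string entry); }

public class CurrencyConverter
{
    private readonly IRateSource _rates;
    private readonly IAuditLog _log;

    public CurrencyConverter(IRateSource rates, IAuditLog log)
    {
        _rates = rates;
        _log = log;
    }

    public decimal ToEuros(decimal dollars)
    {
        decimal result = dollars * _rates.CurrentRate();
        _log.Record("converted " + dollars);
        return result;
    }
}

[TestFixture]
public class CurrencyConverterTests
{
    [Test]
    public void ToEuros_UsesCurrentRate()
    {
        var rates = new Mock<IRateSource>();
        rates.Setup(r => r.CurrentRate()).Returns(0.5m);  // stub: supplies data
        var log = new Mock<IAuditLog>();                  // mock: interaction checked

        var converter = new CurrencyConverter(rates.Object, log.Object);

        Assert.AreEqual(5m, converter.ToEuros(10m));                  // state-based assertion
        log.Verify(l => l.Record(It.IsAny<string>()), Times.Once());  // behavioural check
    }
}
```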
The biggest problem I have had with TDD has been working with a specification from a standards body and a third-party implementation of that standard that was the de facto standard. I coded lots of really nice unit tests to the letter of the specification, only to find that the implementation on the other side of the fence saw the standard as more of an advisory document; they played quite loose with it. The only way to fix this was to test with the implementation as well as the unit tests, and refactor the tests and code as necessary. The real problem was the belief on my part that as long as I had code and unit tests all was good. Not so. You need to be building actual outputs and performing functional testing at the same time as you are unit testing. Deliver small pieces of benefit all the way through the process, into users' or stakeholders' hands.

Just as an addition to this, I thought I would say I have put a blog post up on my thoughts on getting started with testing (following this discussion and my own research), since it may be useful to people viewing this thread.
"TDD – Getting Started with Test-Driven Development" - I have got some great feedback so far and would really appreciate any more that you guys have to offer.
I hope this helps! :)


In TDD, should tests be written by the person who implemented the feature under test? [closed]

We are running a project that we want to develop using test-driven development. I thought about some questions that came up when initiating the project. One question was: who should write the unit test for a feature? Should the unit test be written by the feature-implementing programmer? Or should it be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass?
If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure with mini-iterations, so it would be too complex to have the tests written by another programmer. Is that right?
What would you say? Should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method should do?
In TDD the developer first writes the unit tests that fail and then fixes the production code to make the test pass. The idea is that the changes are made in really small steps - so you write a test that calls a method that doesn't exist, then you fix the test by adding an empty method, then you add some assertion to the test about the method so it fails again, then you implement the first cut of the method, etc. Because these steps are so small it is not practical to have a separate person write the tests. On the other hand I would recommend pairing, so that you gain some additional eyeballs making sure the code makes sense.
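Sketched as code, those micro-steps might look like this in NUnit; IntStack is an invented example, and in practice you run the tests between every stage:

```csharp
// Red/green micro-steps, sketched with comments. IntStack is hypothetical.
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class IntStackTests
{
    [Test]
    public void Push_ThenPop_ReturnsPushedItem()
    {
        // Step 1: this test doesn't compile until IntStack and its empty
        // Push/Pop stubs exist; adding them "fixes" the test.
        var stack = new IntStack();

        // Step 2: with stubbed-out methods the assertion fails (red).
        stack.Push(42);
        Assert.AreEqual(42, stack.Pop());
    }
}

// Step 3: the first cut of the implementation makes the test pass (green).
public class IntStack
{
    private readonly List<int> _items = new List<int>();

    public void Push(int item) { _items.Add(item); }

    public int Pop()
    {
        int last = _items[_items.Count - 1];
        _items.RemoveAt(_items.Count - 1);
        return last;
    }
}
```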
I think it would be possible to have another person/team/or even the client (when you use tools like FitNesse) write acceptance tests, which test the whole functionality at a higher level.
One of the benefits of TDD is the fast feedback cycle. Having another developer write the tests would slow the process down too much. The same developer should write both.
It could be done both ways, you could write the unit test yourself, or go for the ping pong approach where you take turns with another developer writing unit tests and writing the implementation if you are pairing. The right solution is the one that works for you and your team. I prefer to write the test myself, but I know others that have had luck with the ping pong approach as well.
Unit Tests and Acceptance Tests are two different things, both of which can (and should) be done in TDD. Unit Tests are written from the standpoint of the developer, to make sure that the code is doing what she expects it to. Acceptance Tests are written from the standpoint of the customer, to make sure the code fulfills the appropriate need. It can make a lot of sense for the Acceptance Tests to be written by someone else (usually because it requires a slightly different mindset and domain knowledge, and because they can be done in parallel) but Unit Tests should be written by the developer.
TDD also says that you shouldn't write any code except in response to a failing test, so having to wait for someone else to write the Unit Tests seems pretty inefficient.
The Unit Test should be written prior to coding and test that a Unit meets the requirements, therefore it should be fine for the developer implementing the code to also write the Unit Test.
I think you need to separate Automated Unit Testing from Test Driven Development.
(IMHO it's not just you who needs to make this vital distinction.)
AUT strongly recommends, while TDD requires, that tests be written first.
TDD furthermore makes the test an essential part of the writing code process. TDD is not so much a method of quality assurance, but a way to think about code - so separate responsibilities would be against the philosophy of TDD. They'd also be impractical - the new test / new code cycles are very small, usually a matter of minutes. In my understanding, Test Driven Design would be a better description.
AUT can be fitted onto an existing code base (although often badly, depending on the size and structure of the code base). Separate responsibilities might have some advantages here. Still, AUT puts some pressure on design, so the separation would be at the "who types the code" level.
Distinction: I freely admit that I don't like the idea of TDD. It might work well for a certain type of coder, for certain applications, in certain markets - but all examples, demos and walkthroughs I've seen up to now make me shudder. OTOH, I consider AUT a valuable tool for quality assurance. One valuable tool.
I'm a little confused here.
You say that you want to use TDD and you do seem to understand it correctly that a programmer writes a test, then the same programmer writes the implementation and does it in the next few seconds/minutes after writing the test. That is part of the definition of TDD. (btw 'the same programmer' also means 'the other programmer in the pair' when practising pair programming).
If you want to do something different, then go for it and write up your experiences in a blog or article.
What you shouldn't do is to say that what you do different is TDD.
The reason for 'the same programmer' writing the implementation, and writing it very soon after the test is for the purposes of rapid feedback, to discover how to write good tests, how to design software well and how to write good implementations.
Please see The Three Rules Of Tdd.
Per Justin's response, not only is it fine for the implementing developer to write the test, it's the de facto standard. It is, theoretically, also acceptable for another programmer to write the test. I have toyed with the idea of a "test" programmer supporting a "feature" developer, but I have not encountered examples.
If I write a test for an object, in addition to the inputs and outputs I expect, I have to know the interface it exposes. In other words, the classes and methods under test must be decided upon before development begins. In twelve years I have only once worked in a shop that achieved that granularity of design before development began. I am not sure what your experiences have been, but it doesn't seem very Agile to me.

TDD vs. Unit testing [closed]

My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program but it is a struggle. Which brings me to my question(s).
There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD does a compromise still bring added benefits?
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
What's the best way to bring a struggling team into TDD? And failing that is it still worth writing unit tests even if it is after the code is written?
EDIT
What I've taken away from this is that it is important for us to start unit testing, somewhere in the coding process. Those in the team who pick up the concept can start moving more towards TDD and testing first. Thanks for everyone's input.
FOLLOW UP
We recently started a new small project and a small portion of the team used TDD, the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks for everyone who offered advice!
If the team is floundering at implementing TDD, but they weren't creating any Unit Tests before...then start them off by creating Unit Tests after their code is written. Even Unit tests written after the code are better than no Unit Tests at all!
Once they're proficient at Unit Testing (and everything that comes with it), then you can work on getting them to create the tests first...and code second.
It is absolutely still worth writing the unit tests after the code is written. It's just that it's often harder, because your code wasn't designed to be testable and you may have overcomplicated it.
I think a good pragmatic way to bring a team into TDD is to provide the alternative method of "test-during development" in the transition period, or possibly in the long-term. They should be encouraged to TDD sections of code that seem natural to them. However, in sections of code that seem difficult to approach test-first or when using objects that are predetermined by a non-agile A&D process, developers can be given the option of writing a small section of the code, then writing tests to cover that code, and repeating this process. Writing unit tests for some code immediately after writing that code is better than not writing any unit tests at all.
It's in my humble opinion better to have 50% test coverage with "code first, test after" and a 100% completed library, than 100% test coverage and a 50% completed library with TDD. After a while, your fellow developers will hopefully find it entertaining and educational to write tests for all of the public code they write, so TDD will sneak its way into their development routine.
TDD is about design! So if you use it, you will be sure to have a testable design of your code, making it easier to write your tests. If you write tests after the code is written they are still valuable but IMHO you will be wasting time since you will probably not have a testable design.
One suggestion I can give to you to try to convince your team to adopt TDD is using some of the techniques described in Fearless Change: Patterns for Introducing New Ideas, by Mary Lynn Manns and Linda Rising.
I just read this on a calendar: "Every rule, executed to its utmost, becomes ridiculous or even dangerous." So my suggestion is not to be religious about it. Every member of your team must find a balance between what they feel "right" when it comes to testing. This way, every member of your team will be most productive (instead of, say, thinking "why do I have to write this sti**** test??").
So some tests are better than none, tests after the code are better than few tests and testing before the code is better than after. But each step has its own merits and you shouldn't frown upon even small steps.
If they're new to testing, then IMO start off testing code that's already been written and slowly graduate to writing tests first. As someone trying to learn TDD and new to unit testing, I've found it kind of hard to do a complete 180 and change my mindset to write tests before code, so the approach I'm taking is sort of a 50-50 mix: when I know exactly what the code will look like, I'll write the code and then write a test to verify it. For situations where I'm not entirely sure, I'll start with a test and work my way backwards.
Also remember that there is nothing wrong with writing tests to verify code, instead of writing code to satisfy tests. If your team doesn't want to go the TDD route then don't force it on them.
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
There is absolutely no doubt about the fact that there is value in unit tested code (regardless of when tests were written) and I include "the code is unit tested" in the "Definition of Done". People may use TDD or not, as long as they test.
Regarding version control, I like to use "development branches" with a unit tested policy (i.e. the code compiles and builds, all unit tests pass). When features are done, they are published from development branches to the trunk. In other words, the trunk branch is the "Done branch" (No junk on the trunk!) and has a shippable policy (can release at any time) which is more strict and includes more things than "unit tested".
This is something that your team will have to have its own successes with before they begin to believe it in. I'll rant about my nUnit epiphany for anyone who cares:
About 5 years ago I discovered NUnit when working on a project. We had almost completed V1.0 and I created a few tests just to try out this new tool. We had a lot of bugs (obviously!) because we were a new team, on a tight deadline, with high expectations (sound familiar?), etc. Anyway, we got 1.0 in and started on 1.1. We re-orged the team a little bit and I got two devs assigned to me. I did a 1-hour demo for them and told them that everything we wrote had to have a test case with it. We constantly ran "behind" the rest of the team during the 1.1 dev cycle because we were writing more code: the unit tests. We ended up working more, but here's the payoff: when we finally got into testing we had exactly 0 bugs in our code. We helped everyone else debug and repair their bugs. In the postmortem, when the bug counts showed up, it got EVERYONE's attention.
I'm not dumb enough to think you can test your way to success, but I am a true believer when it comes to unit tests. The project adopted NUnit and it soon spread to the company for all .NET projects as a result of that one success. The total time period for our V1.1 release was 9 dev weeks, so it was definitely NOT an overnight success. But long term, it proved successful for our project and the company we built solutions for.
There is no doubt that testing (First, While or even After) will save your bacon, and improve your productivity and confidence. I recommend adopting it!
I was in a similar situation. Because I was a "noob" developer, I was often frustrated when working on team projects by the fact that a contribution had broken the build. I did not know if I was to blame or, in some cases, who to blame. But I was more concerned that I was doing the same thing to my fellow developers. This realisation then motivated me to adopt some TDD strategies. Our team started having silly games and rules, like you cannot go home till all your tests pass, or if you submit something without a test you have to buy everyone beer/lunch/etc., and it made TDD more fun.
One of the most useful aspects of unit testing is ensuring the continuing correctness of already-working code. When you can refactor at will, let an IDE remind you of compile-time errors, and then click a button to let your tests spot any potential runtime errors (sometimes arriving in previously trivial blocks of code), then I think you will find your team starting to appreciate TDD. So starting with testing existing code is definitely useful.
Also, to be blunt, I have learned more about how to write testable code by trying to test written code than from starting with TDD. It can be too abstract at first if you are trying to think of contracts that will both accomplish the end goal and allow testing. But when you look at code and can say "This singleton here completely spoils dependency injection and makes testing this impossible," you start to develop an appreciation for what patterns make your testing life easier.
Well, if you do not write tests first it's not "test-driven", it's just testing. Testing has benefits in itself, and if you already have a code base, adding tests for it is certainly useful even if it's not TDD but merely testing.
Writing tests first is about focusing on what the code should do before writing it. Yes you also get a test doing that and it's good, but some may argue it's not even the most important point.
What I would do is train the team on toy projects like these (see Coding Dojo, Katas) using TDD (if you can get experienced TDD programmers to participate in such a workshop, even better). When they see the benefits they will use TDD for the real project. But meanwhile do not force them; if they do not see the benefit they won't do it right.
If you have design sessions before writing code or have to produce a design doc, then you could add Unit Tests as the tangible outcome of a session.
This could then serve as a specification of how your code should work. Encourage pairing on the design session, to get people talking about how something should work and what it should do in given scenarios. Identify the edge cases, with explicit test cases for them, so everyone knows what the code is going to do if given a null argument, for example.
As an aside, BDD may also be of interest.
You may find some traction by showing an example or two where TDD results in less code being written: because you only write code required to make the test pass, the temptation to gold-plate or build things you aren't going to need (YAGNI) is easier to resist. Code you don't write doesn't need to be maintained, refactored, etc., so it's a "real savings" that can help sell the concept of TDD.
If you can clearly demonstrate the value in terms of time, cost, code and bugs saved, you may find it's an easier sell.
Building JUnit test classes is the way to start; for existing code it's the only way to start. In my experience it is very useful to create test classes for existing code. If management thinks this will take too much time, you can propose writing test classes only when the corresponding class is found to contain a bug, or is in need of cleanup.
For the maintenance process the approach to get the team over the line would be to write JUnit tests to reproduce bugs before you fix them, i.e.
bug is reported
create JUnit test class if needed
add a test that reproduces the bug
fix your code
run the test to show that the current code no longer exhibits the bug
You can explain that "documenting" bugs in this way prevents those bugs from creeping back in later. That is a benefit the team can experience immediately.
I have done this in many organizations and I have found the single best way to get TDD started and followed is to set up pair programming. If you have someone else you can count on that knows TDD then the two of you can split up and pair with other developers to actually do some paired programming using TDD. If not I would train someone who will help you to do this before presenting it to the rest of the team.
One of the major hurdles with unit testing, and especially TDD, is that developers don't know how to do it, so they cannot see how it could be worth their time. Also, when you first start out, it is much slower and doesn't seem to provide benefits. It only really provides you benefits when you are good at it. By setting up paired programming sessions you can get developers to learn it quickly and get good at it sooner. Additionally, they will be able to see immediate benefits from it as you work through it together.
This approach has worked many times for me in the past.
One powerful way to discover the benefits of TDD is to do a significant rewrite of some existing functionality, perhaps for performance reasons. By creating a suite of tests that do a good job of covering all the functionality of the existing code, you gain the freedom to refactor to your heart's content, with full confidence that your changes are safe.
Note that in this case I'm talking about testing the design or contract; unit tests that test implementation details will not be suitable here. But then again, TDD can't test implementation by definition, as the tests are supposed to be written before the implementation.
TDD is a tool that developers can use to produce better code. I happen to feel that the exercise of writing testable code is at least as valuable as the tests themselves. Isolating the IUT (Implementation Under Test) for testing purposes has the side effect of decoupling your code.
TDD isn't for everyone, and there's no magic that will get a team to choose to do it. The risk is that unit test writers that don't know what's worth testing will write a lot of low value tests, which will be cannon fodder for the TDD skeptics in your organization.
I usually make automated Acceptance Tests non-negotiable, but allow developers to adopt TDD as it suits them. I have my experienced TDDers train/mentor the rest and "prove" the usefulness by example over a period of many months.
This is as much a social/cultural change as it is a technical one.

How much testing is enough? [closed]

I recently spent about 70% of the time coding a feature writing integration tests. At one point, I was thinking “Damn, all this hard work testing it, I know I don’t have bugs here, why do I work so hard on this? Let’s just skimp on the tests and finish it already…”
Five minutes later a test fails. Detailed inspection shows it’s an important, unknown bug in a 3rd party library we’re using.
So … where do you draw the line on what to test and what to take on faith? Do you test everything, or only the code where you expect most of the bugs?
In my opinion, it's important to be pragmatic when it comes to testing. Prioritize your testing efforts on the things that are most likely to fail, and/or the things that it is most important that do not fail (i.e. take probability and consequence into consideration).
Think, instead of blindly following one metric such as code coverage.
Stop when you are comfortable with the test suite and your code. Go back and add more tests when (if?) things start failing.
When you're no longer afraid to make medium to major changes in your code, then chances are you've got enough tests.
Good question!
Firstly - it sounds like your extensive integration testing paid off :)
From my personal experience:
If its a "green fields" new project,
I like to enforce strict unit testing
and have a thorough (as thorough as
possible) integration test plan
designed.
If its an existing piece of software
that has poor test coverage, then I
prefer to design a set integration
tests that test specific/known
functionality. I then introduce
tests (unit/integration) as I
progress further with the code base.
How much is enough? Tough question. I don't think that there can ever be enough!
"Too much of everything is just enough."
I don't follow strict TDD practices. I try to write enough unit tests to cover all code paths and exercise any edge cases I think are important. Basically I try to anticipate what might go wrong. I also try to match the amount of test code I write to how brittle or important I think the code under test is.
I am strict in one area: if a bug is found, I first write a test that exercises the bug and fails, make the code changes, and verify that the test passes.
Gerald Weinberg's classic book "The Psychology of Computer Programming" has lots of good stories about testing. One I especially like is in Chapter 4, "Programming as a Social Activity": "Bill" asks a co-worker to review his code and they find seventeen bugs in only thirteen statements. Code reviews provide additional eyes to help find bugs; the more eyes you use, the better chance you have of finding ever-so-subtle bugs. As Linus said, "Given enough eyeballs, all bugs are shallow." Your tests are basically robotic eyes that will look over your code as many times as you want, at any hour of day or night, and let you know if everything is still kosher.
How many tests are enough depends on whether you are developing from scratch or maintaining an existing system.
When starting from scratch, you don't want to spend all your time writing tests and end up failing to deliver because the 10% of the features you were able to code are exhaustively tested. There will be some amount of prioritization to do. One example is private methods. Since private methods must be used by code that is visible in some form (public/package/protected), private methods can be considered covered by the tests for the more visible methods. This is where you need to include some white-box tests if there are important or obscure behaviors or edge cases in the private code.
Tests should help you make sure you 1) understand the requirements, 2) adhere to good design practices by coding for testability, and 3) know when previously existing code stops working. If you can't describe a test for some feature, I would be willing to bet that you don't understand the feature well enough to code it cleanly. Using unit test code forces you to do things like pass in as arguments those important things like database connections or instance factories instead of giving in to the temptation of letting the class do way too much by itself and turning into a 'God' object. Letting your code be your canary means that you are free to write more code. When a previously passing test fails it means one of two things, either the code no longer does what was expected or that the requirements for the feature have changed and the test simply needs to be updated to fit the new requirements.
When working with existing code, you should be able to show that all the known scenarios are covered, so that when the next change request or bug fix comes along, you will be free to dig into whatever module you see fit without the nagging worry "What if I break something?", which leads to spending more time testing even small fixes than it took to actually change the code.
So, we can't give you a hard and fast number of tests, but you should shoot for a level of coverage that increases your confidence in your ability to keep making changes or adding features; otherwise you've probably reached the point of diminishing returns.
If you or your team has been tracking metrics, you could see how many bugs are found for every test as the software life-cycle progresses. If you've defined an acceptable threshold where the time spent testing does not justify the number of bugs found, then THAT is the point at which you should stop.
You will probably never find 100% of your bugs.
I spend a lot of time on unit tests, but very little on integration tests. Unit tests allow me to build out a feature in a structured way, and afterwards you have some nice documentation and regression tests that can be run on every build.
Integration tests are a different matter. They are difficult to maintain and by definition integrate a lot of different pieces of functionality, often with infrastructure that is difficult to work with.
As with everything in life it is limited by time and resources and relative to its importance. Ideally you would test everything that you reasonably think could break. Of course you can be wrong in your estimate, but overtesting to ensure that your assumptions are right depends on how significant a bug would be vs. the need to move on to the next feature/release/project.
Note: My answer primarily address integration testing. TDD is very different. It was covered on SO before, and there you stop testing when you have no more functionality to add. TDD is about design, not bug discovery.
I prefer to unit test as much as possible. One of the greatest side-effects (other than increasing the quality of your code and helping keep some bugs away) is that, in my opinion, high unit test expectations require one to change the way they write code for the better. At least, that's how it worked out for me.
My classes are more cohesive, easier to read, and much more flexible because they're designed to be functional and testable.
That said, I default to unit test coverage requirements of 90% (line and branch), using JUnit and Cobertura (for Java). When I feel that these requirements cannot be met due to the nature of a specific class (or bugs in Cobertura), I make exceptions.
Unit tests start with coverage, and really work for you when you've used them to test boundary conditions realistically. For advice on how to implement that goal, the other answers all have it right.
This article gives some very interesting insights on the effectiveness of user testing with different numbers of users. It suggests that you can find about two thirds of your errors with only three users testing the application, and as much as 85% of your errors with just five users.
Unit testing is harder to put a discrete value on. One suggestion to keep in mind is that unit testing can help to organize your thoughts on how to develop the code you're testing. Once you've written the requirements for a piece of code and have a way to check it reliably, you can write it more quickly and reliably.
I test Everything. I hate it, but it's an important part of my work.
I worked in QA for 1.5 years before becoming a developer.
You can never test everything. (I was told during training that testing all the permutations of a single text box would take longer than the age of the known universe.)
As a developer it's not your responsibility to know or state the priorities of what is important to test and what not to test. Testing and quality of the final product is a responsibility, but only the client can meaningfully state the priorities of features, unless they have explicitly given this responsibility to you. If there isn't a QA team and you don't know, ask the project manager to find out and prioritise.
Testing is a risk-reduction exercise, and the client/user will know what is important and what isn't. Using test-first development from Extreme Programming will be helpful, so that you have a good test base and can regression test after a change.
It's important to note that, due to natural selection, code can become "immune" to tests. Code Complete says that when fixing a defect you should write a test case for it and look for similar defects; it's a good idea to write test cases for those similar defects as well.

Why should I use Test Driven Development? [duplicate]

For a developer that doesn't know about Test-Driven Development, what problem(s) will be solved by adopting TDD?
[EDIT] Let's assume that the developer already (ab)uses a unit testing framework.
Here are three reasons that TDD might help a developer/team:
Better understanding of what you're going to write
Enforces the policy of writing tests a little better
Speeds up development
One reason to write the tests first is to gain a better understanding of the actual code before you write it. To me, this is the main benefit of test-driven development. When you write the test cases first, you think more critically about the corner cases. It's then easier to address them when you write the code and ensure that they're handled correctly.
Another reason is to actually enforce writing the tests. Often when people do unit-testing without the TDD, they have a testing framework set up, write some new code, and then quit. They think that the code already works just fine, so why write tests? It's simple enough that it won't break, right? But now you've lost the advantages of doing unit-tests in the first place (completely different discussion). Write them first, and they're already there.
Writing these tests first could mean that you don't need to launch the program in a debugging environment (slow — especially for larger projects) to test if a few small things work. Of course there's no excuse for not doing so before committing changes.
Convincing yourself or other people to write the tests first may be difficult. You may have better luck getting them to write both at the same time which may be just as beneficial.
Presumably you test code that you've written before you commit it to a repository.
If that's not true you have other issues to deal with.
If it is true, you can look at writing tests using a framework as a way to automate those main routines or drivers that you currently write so you can run all of them automatically at the push of a button. You don't have to pore over output to decide if the test passed or failed; you embed the success or failure of the test in the code and get a thumbs up or down decision right away. Running all the tests at once reduces the chances of a "whack a mole" situation where you fix something in one class and break something else. All the tests have to pass.
Sounds good so far, yes?
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
If you're already using a test framework, getting good value out of the tests you write, and have meaningful code coverage up around 70%, then I think you're doing well. I'm not sure that TDD will give you much more value. It's up to you to decide whether or not you go that extra mile. Personally, I don't do it. I write tests after the class and refactor if I feel the need. Some people might find it helpful to write the test first knowing it'll fail, but I don't.
(This is more of a comment agreeing with duffymo's answer than an answer of its own.)
duffymo answers:
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
I think it's actually to force coders to think about what their code is doing. Having to think about a test makes one consider what the code is supposed to do: what the pre-conditions and post-conditions are, which functions are primitive and which are composed of primitive functions, what the minimal necessary public interface is, and what's an implementation detail.
These are all things I routinely think about, so like you, "test first" doesn't add a whole lot, for me. And frankly (I know this is heresy in some circles) I like to "anchor" the core ideas of a class by sketching out the public interface first; that way I can look at it, mentally use it, and see if it's as clean as I thought it was. (A class or a library should be easy and intuitive for client programmers to use.)
In other words, I do what TDD tries to ensure happens by writing tests first, but like duffymo, I get there a different way.
And the real point of "test first" is to get a coder to pause and think like a designer. It's silly to make a fetish of how the programmer enters that state; for those who don't do it naturally, "test first" serves as a ritual to get them there. For those who do, "test first" doesn't add much -- and can get in the way of the programmer's habitual way of getting into that state.
Again, we want to look at results, not rituals. If a junior guy needs a ritual, a "stations of the cross" or a rosary* to "get in the groove", "test first" serves that purpose. If someone has their own way to get there, that's great too.
Note that I'm not saying that code shouldn't be tested. It should. It gives us a safety net, which in turn allows us to concentrate our attention on writing good code, even audacious code, because we know the net is there to catch errors.
All I am saying is that fetishistic insistence on "test first" confuses the method (one of many) with the goal, making the programmer think about what he's coding.
* To be ecumenical, I'll note that both Catholics and Muslims use rosaries. And again, it's a mechanical, muscle-memory way to put oneself into a certain frame of mind. It's a fetish (in the original sense of a magic object, not the "sexual fetish" meaning) or good-luck charm. So is saying "Om mani padme hum", or sitting zazen, or stroking a "lucky" rabbit's foot. (Not so lucky for the rabbit.) The philosopher Jerry Fodor, when thinking about hard problems, has a similar ritual: he repeats to himself, "C'mon, Jerry, you can do it!" (I tried that too, but since my name is not Jerry, it didn't work for me. ;) )
Ideally:
You won't waste time writing features you don't need. You'll have a comprehensive unit test suite to serve as a safety net for refactoring. You'll have executable examples of how your code is intended to be used. Your development flow will be smoother and faster; you'll spend less time in the debugger.
But most of all, your design will be better. Your code will be better factored - loosely coupled, highly cohesive - and better formed - smaller, better-named methods & classes.
For my current project (which runs on a relatively heavyweight process), I have adopted a peculiar form of TDD that consists of writing skeleton test cases based on requirements documents and GUI mockups. I write dozens, sometimes hundreds of those before starting to implement anything (this runs totally against "pure" TDD which says you should write a few tests, then immediately start on a skeleton implementation).
I have found this to be an excellent way to review the requirements documents. I have to think about the behaviour described in them much more intensively than if I were just to read them. As a consequence, I find many more inconsistencies and gaps which I would otherwise only have found during implementation. This way, I can ask for clarification earlier and have better requirements when I start implementing.
Then, during implementation, the tests are a way to measure how far I've yet to go. And they prevent me from forgetting anything (don't laugh, that's a real problem when you work on larger use cases).
And the moral is: even when your dev process doesn't really support TDD, it can still be done in a way, and improve quality and productivity.
I personally do not use TDD, but one of the biggest pros I can see with the methodology is how it helps ensure customer satisfaction. Basically, the idea is that the steps of your development process are these:
1) Talk to customer about what the application is supposed to do, and how it is supposed to react to different situations.
2) Translate the outcome of 1) into Unit Tests, which each test one feature or scenario.
3) Write simple, "sloppy" code that (barely) passes the tests. When this is done, you have met your customer's expectations.
4) Refactor the code you wrote in 3) until you think you've done it in the most effective way possible.
When this is done you have hopefully produced high-quality code, that meets your customer's needs. If the customer now wants a new feature, you start the cycle over - discuss the feature, write a test that makes sure it works, write code that passes the test, refactor.
And as others have said, each time you run your tests you ensure that the old code still works, and that you can add new functionality without breaking old one.
Most of the people I have talked to don't use a complete TDD model. They usually find the best testing model that works for them. Find yours: play with TDD and find where you are most productive.
TDD (Test-Driven Development/Design) provides the following advantages:
ensures you know the story card's acceptance criteria before you start
ensures that you know when to stop coding (i.e., when the acceptance criteria have been met, thus preventing gold-plating)
As a result you end up with code that is
testable
cleanly designed
able to be refactored with confidence
the minimal code necessary to satisfy the story card
a living specification of how the code works
able to support a sustainable pace of new features
I made a big effort to learn TDD for Ruby on Rails development. It took several days before I really got into it. I was very skeptical, but I made the effort because programmers I respect support it.
At this point I feel it was definitely worth the effort. There are several benefits which I'm sure others will be happy to list for you. To me the most important advantage is that it helps avoid that nightmare situation late in a project where something suddenly breaks for no apparent reason and then you're spending a day and a half with the debugger. It helps prevent your code base from deteriorating as you add more and more logic to it.
It is common knowledge that writing tests and having a large number of automated tests are a Good Thing.
However, without TDD, it often just becomes tedious. People write tests and then leave them; the tests do not get updated as they should, nor do new features get tested as often as they should.
A big part of this is because the code has become a pain to test; TDD will influence your design so that it is much easier to test. Because you've used TDD, you have a good number of tests, which makes it much easier to find regressions whenever your code or requirements change. This simplifies debugging dramatically, builds an appreciation of good TDD, and encourages more tests to be written when changes are needed; and we're back to the start of the cycle.
There are many advantages:
Higher code quality
Fewer bugs
Less wasted time
Any of those alone would be sufficient justification to implement TDD.

How do you persuade others to write unit tests? [closed]

I've been test-infected for a long time now, but it would seem the majority of developers I work with either have never tried it or dismiss it for one reason or another, with arguments typically being that it adds overhead to development or they don't need to bother.
What bothers me most about this is that when I come along to make changes to their code, I have a hard time getting it under test as I have to apply refactorings to make it testable and sometimes end up having to do a lot of work just so that I can test the code I'm about to write.
What I want to know is, what arguments would you use to persuade other developers to start writing unit tests? Most developers I've introduced to it take to it quite well, see the benefits and continue to use it. This always seems to be the good developers though, who are already interested in improving the quality of their code and hence can see how unit testing does this.
How do you persuade the rest of the motley crew? I'm not looking for a list of testing benefits, as I already know what these are, but for techniques you have used or would use to get other people on board. Tips on how to persuade management to take an active role are appreciated as well.
There's more than one side to that question, I guess. I find that actually convincing developers to start using tests is not that hard, because the list of advantages of testing often speaks for itself. That said, it is quite a barrier to actually get going, and I find that the learning curve is often a bit steep, especially for novice coders. Throwing testing frameworks, the TDD test-first mentality, and a mocking framework at someone who's not yet comfortable with C#, .NET or programming in general could just be too much to handle.
I work as a consultant, and therefore I often have to address the problem of implementing TDD in an organization. Luckily, when companies hire me it is often because of my expertise in certain areas, and therefore I might have a little advantage when it comes to getting people's attention. Or maybe it's just that it's a bit easier for me as an outsider to come into a new team and say "Hi! I've tried TDD on other projects and I know that it works!" Or maybe it's my persuasiveness/stubbornness? :) Either way, I often don't find it very hard to convince devs to start writing tests. What I find hard, though, is teaching them how to write good unit tests. And, as you point out in your question, to stay on the righteous path.
But I have found one method that I think works pretty well when it comes to teaching unit testing. I've blogged about it here, but the essence is to sit down and do some pair programming. While pair programming, I start out by writing the unit test first. This way I show them a bit of how the testing framework works, how I structure the tests, and often some use of mocking. Unit tests should be simple, so all in all the test should be fairly easy to understand even for junior devs. The worst part to explain is often the mocking, but using easy-to-read mocking frameworks like Moq helps a lot. Then, when the test is written (and nothing compiles or passes), I hand the keyboard over to my fellow coder so that (s)he can implement the functionality. I simply tell her/him: "Make it go green!" Then we move on to the next test: I write the test, and the soon-to-be-test-infected dev next to me writes the functionality.
Now, it's important to understand that at this point the devs you are teaching are probably not yet convinced that this is the right way to code. The point where most devs seem to see the (green) light is when a test fails due to some code change that they never thought would break any functionality. When the test that covers that functionality blows up, that's when you've got yourself a loyal TDD'er on your team. Or at least that's my experience; but as always, your mileage will vary :)
Quality speaks for itself. If you're more successful than everyone else, that's all you need to say.
Use a test-coverage tool. Make it very visible. That way everybody can easily see how much code in each area is passed, failed and untested.
Then you may be able to start a culture where "untested" is a sign of bad coding, "failed" is a sign of work in progress and "passed" is a sign of finished code.
This works best if you also do "test-first". Then "untested" becomes "you forgot step 1".
Of course you don't need 100% test coverage. But if one area has 1% coverage and another has 30%, you have a metric for which area is most likely to fail in production.
Lead by example: show evidence that there are fewer regressions in unit-tested code than elsewhere.
Get QA and management buy-in so that your process mandates unit testing.
Be ready to help others to get started with unit testing: provide assistance, supply a framework so that they can start easily, run an introductory presentation.
You just have to get used to the mantra "if it ain't tested, the work ain't done!"
Edit: To add some more meat to my facetious comment above, how can someone know if they're actually finished if they haven't tested their work?
Mind you, you will have a battle convincing others if time isn't allowed in the estimate for testing the developed code.
A one-to-one split between the effort for coding and the effort for testing seems to be a good number.
HTH
cheers,
Rob
Compliment those who write more tests and produce good results; show the best examples to others and ask them to produce the same or better results.
People (and processes) don't change without one or more pain points. So you need to find the significant pain points and demonstrate how unit testing might help deal with them.
If you can't find any significant pain points, then unit testing may not add a lot of value to your current process.
As Steve Lott implies, delivering better results than the other team members will also help. But without the pain points, my experience is that people won't change.
Two ways: convince the project manager that unit testing improves quality AND saves time overall, then have him make unit tests mandatory.
Or wait for a development crunch just before an important release date, where everyone has to work overtime and weekends to finish the last features and eliminate the last bugs, only to find they've just introduced more bugs. Then point out that with proper unit tests they wouldn't have to work like that.
Another situation where unit tests can be shown as indispensable is when a release was actually delivered and turns out to contain a serious bug due to last-minute changes.
If developers are seeing that the "successful" developers are writing unit tests, and they are still not doing it then I suggest unit tests should become part of the formal development life-cycle.
E.g. nothing can be checked in until a unit test is written and reviewed.
Perhaps reefnet_alex's answer will help you:
Is Unit Testing worth the effort?
I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all." I interpret this as giving me permission to write tests where I think they'll be most useful, even if the rest of my code coverage is woefully incomplete.
You mentioned that your manager is on board with unit tests. If that's the case, then why isn't he (she) enforcing it? It isn't your job to get everybody else to follow along or to teach them; in fact, other developers will often resent you if you try to push it on them. In order to get your fellow developers to write unit tests, the manager has to emphasize it strongly. It might turn out that part of that emphasis is education on unit test implementation, for which you might end up being the educator, and that's great, but management of it is everything.
If you're in an environment where the group decides the style of implementation, then you have more of a say in how the group dynamic should be. If you are in that sort of environment and the group doesn't want to emphasize unit tests while you do, then maybe you're in the wrong group/company.
I have found that "evangelizing" or preaching rarely works. As others have said, do it your way for your own code, make it known that you do it, but don't try to force others to do it. If people ask about it be supportive and helpful. Offer to do a few lunch-time seminars or informal dog and pony shows. That will do a lot more than just complaining to your manager or the other developers that you have a hard time writing tests for code they wrote.
Slow and steady - it is not going to change overnight.
Once I realized that, at one place where I worked, the acceptance of peer reviews improved tremendously. My group just did it and stopped trying to get others to do it. Eventually people started asking about how we got some of the success we did. Then it was easier.
We have a test framework which includes automated running of the test suite whenever anyone commits a change. If someone commits code that fails the tests, the whole team gets emailed with the errors.
This leads to introduced bugs being fixed pretty quickly.