So... I need to train the team on unit testing - could use C&C on my lesson plan

So - management is looking to do a push towards unit testing in all applications moving forward - and eventually get into full TDD/continuous integration/automated build mode (I hope). At this point, however, we are just concerned with getting everyone who develops apps going forward to use unit testing. I'd like to just start with the basics.
I won't lie - I'm no expert in unit testing, but I do have a good enough understanding to start the initiative with the basics and allow us to grow together as a team. I'd really love to get some comments & criticism from all you experts on my plan of attack for this thing. It's a team of about 10 developers in a small shop, which makes for a great opportunity to move forward with agile development methodologies and best practices.
First off - the team consists mainly of mid-level developers, with a couple of junior devs and one senior, all with minimal to no exposure to unit testing. The training will be a semi-monthly meeting of about 30-60 minutes each time (it will probably wind up running an hour, I'd guess, and maybe we'll have them more often). We will continue these meetings until it makes sense to stop them and allow people to catch up with their own 'homework' and experience - but the push will always be on.
Anyway - here is the lesson plan I have come up with. Well, the first two lessons at least. Any advice from you experts out there on the actual content or structure of the lessons, etc., would be great. Comments & criticism greatly appreciated. Thanks very much.
I apologize if this is 'too much' to post in here or read through. I think this would be a great thread for SO users looking to get into unit testing in the first place as well. Perhaps you could just skip to the 'lesson plans' section - thanks again everyone.
CLIFF NOTES - I realize this post is incredibly long and ugly, so here are the cliff notes: Lesson 1 will be 'hello world unit tests'; Lesson 2 will be opening the solution of my most recent application and showing how to apply each 'hello world' example in real life... Thanks so much everyone for the feedback you've given me so far. I just wanted to highlight the fact that lesson 2 is going to have real-life production unit tests in it, since many suggested I do that when it was my plan from the beginning =)
Unit Testing Lesson Plan
Overview
Why unit test? Seems like a bunch of extra work - so why do it?
• Become the master of your own destiny. Most of our users do not do true UATs, and unfortunately tend to do their testing once in production. With unit-tests, we greatly decrease risk associated with this, especially when we create enough test data and take into account as many top level inputs as we possibly can. While not being a ‘silver bullet’ that prevents all bugs – it is your first line of defense – a huge front line, comparable to that of the SB championship Giants.
• Unit testing enforces good design and architecture practices. It is 'the violent psychopath who maintains your code and knows where you live', so to speak. You simply can't write poor-quality code that is well unit-tested.
• How many times have you not refactored smelly code because you were too scared of breaking something? Automated testing removes this fear and makes refactoring much easier, in turn making code more readable and easier to maintain.
• Bottom line – maintenance becomes much easier and cheaper. The time spent writing unit tests might be costly now – but the time it saves you down the road has been proven time and time again to be far more valuable. This is the #1 reason to automate testing your code. It gives us confidence that allows us to take on more ambitious changes to systems that we might have otherwise had to reduce requirements on, or maybe even not take on at all.
Terminology Review
• Unit testing - testing the lowest level, single unit of work. E.g. - test all possible code paths that a single function can flow through.
• Integration testing - testing how your units work together. E.g. - run a 'job' (or series of function calls) that does a bunch of work with known inputs, then query the database at the end and assert that the values are what you expect from those known inputs (instead of having to eyeball a grid on a web page somewhere, i.e. doing a functional test).
• Fakes - a fake is a type of object whose only purpose is to be used in your tests. It lets you avoid exercising code that you do not want to test. Instead of having to call code you do not want - like a database call - you use a fake object to 'fake' that DB call and perhaps read the data from an XML/Excel file or via a mocking framework (a short sketch of both kinds follows this list).
o Mock - a type of fake against which you make assert statements.
o Stub - a type of fake that you use as placeholder code so you can skip the database call, but do not make asserts against.
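To make the stub/mock distinction concrete, here's a rough, hand-rolled sketch I might walk through (the IOrderRepository/OrderService names are just made up for the example; in the real lessons a framework like NMock 2.0 would generate the mock half for us):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public interface IOrderRepository
{
    decimal GetOrderTotal(int orderId);          // in production this hits the database
    void Save(int orderId, decimal total);
}

// Stub: placeholder code that returns canned data so the DB is never touched.
// We do not assert against it.
public class StubOrderRepository : IOrderRepository
{
    public decimal GetOrderTotal(int orderId) { return 100m; }
    public void Save(int orderId, decimal total) { /* do nothing */ }
}

// Mock: also fakes the DB, but records what happened so the test can assert against it.
public class MockOrderRepository : IOrderRepository
{
    public List<decimal> SavedTotals = new List<decimal>();
    public decimal GetOrderTotal(int orderId) { return 100m; }
    public void Save(int orderId, decimal total) { SavedTotals.Add(total); }
}

// The class under test only sees the interface, so either fake can be handed to it.
public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) { _repository = repository; }

    public decimal TotalWithTax(int orderId, decimal taxRate)
    {
        return _repository.GetOrderTotal(orderId) * (1 + taxRate);
    }

    public void ApplyDiscount(int orderId, decimal discount)
    {
        decimal discounted = _repository.GetOrderTotal(orderId) * (1 - discount);
        _repository.Save(orderId, discounted);
    }
}

[TestClass]
public class FakesTerminologyTests
{
    [TestMethod]
    public void Stub_LetsUsTestLogicWithoutADatabase()
    {
        var service = new OrderService(new StubOrderRepository());
        Assert.AreEqual(110m, service.TotalWithTax(1, 0.10m));   // assert on the result only
    }

    [TestMethod]
    public void Mock_LetsUsAssertThatTheSaveActuallyHappened()
    {
        var mock = new MockOrderRepository();
        new OrderService(mock).ApplyDiscount(1, 0.25m);
        Assert.AreEqual(75m, mock.SavedTotals[0]);                // assert against the mock itself
    }
}
```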
Lessons
Lesson one – Hello Worlds
• Hello World unit test - I will create a 'hello world' console application that is unit tested. I will create this application on the fly during the meeting, showing the tools in Visual Studio 2008 (test project, test tools toolbar, etc.) that we are going to use along the way, while explaining what they do. This will have only a single unit test. (OK, maybe I won't create it 'on the fly' =) - I have to think about it more.) I will also explain the Assert class and its purpose and the general flow of a unit test.
• Hello World, a bit more complicated. Now our function has different paths/logical branches the code can flow through. We will have ~3 unit tests for this function. This will be a pre-written solution I make before the meeting (a rough sketch of what it might look like appears after this list).
• Hello World, dependency injection. (Not using any DI frameworks). A pre-written solution that builds off the previous one, this time using dependency injection. Will explain what DI is and show a sample of how it works.
• Hello World, Mock Objects. A pre-written solution that builds off the previous one, this time using our newly added DI code to inject a mock object into our class to show how mocking works. Will use NMock2.0 as this is the only one I have exposure to. Very simple example to just display the use of mock objects. (Perhaps put this one in a separate lesson?).
• Hello World, (non-automated) integration test. Building off the previous solution, we create an integration test to show how to test 2 functions, or an entire class, together (perhaps put this one in a separate lesson?).
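For reference, here's a rough sketch of what the 'bit more complicated' hello world might end up looking like - one function with a couple of logical branches and ~3 tests, one per path (the Greeter class is just a placeholder; the real example will probably differ):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// The unit under test: one function, two logical branches.
public class Greeter
{
    public string Greet(string name)
    {
        if (string.IsNullOrEmpty(name))
            return "Hello, World!";
        return "Hello, " + name + "!";
    }
}

[TestClass]
public class GreeterTests
{
    // Each test follows the same general flow: arrange, act, assert.
    [TestMethod]
    public void Greet_WithAName_UsesTheName()
    {
        var greeter = new Greeter();
        Assert.AreEqual("Hello, Team!", greeter.Greet("Team"));
    }

    [TestMethod]
    public void Greet_WithNull_FallsBackToWorld()
    {
        var greeter = new Greeter();
        Assert.AreEqual("Hello, World!", greeter.Greet(null));
    }

    [TestMethod]
    public void Greet_WithEmptyString_FallsBackToWorld()
    {
        var greeter = new Greeter();
        Assert.AreEqual("Hello, World!", greeter.Greet(""));
    }
}
```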
Lesson two – now we know the basics – how to apply this in real life?
• General overview of best practices
o Rule #1 - Single Responsibility Principle. Single Responsibility Principle. Single Responsibility Principle. I'm being facetious, but this really is the single most important thing to keep in mind while writing your code. A class should have only one purpose; a function should do only one thing. The key word here is 'unit' test - and the SRP will keep your classes & functions encapsulated into units.
o Dependency injection is your second best friend. DI allows you to 'plug' behavior into classes you have, at run time. Among other things, this is how we use mocking frameworks to make our bigger classes more easily testable (see the small sketch at the end of this plan).
o Always think 'how will I test this?' as you are writing code. If it seems 'too hard to test', it is likely that your code is too complicated. Refactor it into more logical units of classes/functions - take that one class that does 5 things and turn it into 5 classes, one of which calls the other 4. Now your code will be much easier to test - and also easier to read and refactor as well.
o Test behavior, not implementation. In a nutshell, this means that for the most part we test only the public functions on our classes. We don't care about testing the private ones (implementation), because the public ones (behavior) are what our calling code uses. For example... I'm a millionaire software developer and go to the Aston Martin dealership to buy myself a brand new DB9. The sales guy tells me that it can do 0-60 in 3 seconds. How would you test this? Would you lift the engine out and perform diagnostic tests, etc.? No... you would take it onto the parkway and do 160 MPH =). This is testing behavior vs. implementation.
• Reviewing a real life unit-tested application. Here we will go over each of the above ‘hello world’ examples – but the real life versions, using my most recent project as an example. I'll open a simple unit test, a more complex one, one that uses DI, and one that uses Mocks (probably coupled to the DI one). This project is fairly small & simple so it is really a perfect fit. This will also include testing the DAL and how to setup a test database to run these tests against.
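As a companion to rules #1 and #2 above, here is the kind of small sketch I have in mind: the 'one class that does too much' split into units, with the collaborator injected through the constructor so a test double can be plugged in at run time (ReportService, ReportBuilder and IReportSender are invented names for the example - the real-life version will come from my project):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Before: one ReportService that built, formatted AND emailed the report.
// After: each responsibility is its own unit, and the delivery dependency is injected.
public interface IReportSender
{
    void Send(string recipient, string body);    // the real SMTP call lives behind this
}

public class ReportBuilder
{
    public string Build(int itemCount, decimal total)
    {
        return string.Format("{0} items, {1} total", itemCount, total);
    }
}

public class ReportService
{
    private readonly ReportBuilder _builder;
    private readonly IReportSender _sender;

    // Constructor injection: behavior is plugged in at run time.
    public ReportService(ReportBuilder builder, IReportSender sender)
    {
        _builder = builder;
        _sender = sender;
    }

    public void SendDailyReport(string recipient, int itemCount, decimal total)
    {
        _sender.Send(recipient, _builder.Build(itemCount, total));
    }
}

// A hand-rolled mock standing in for the SMTP-backed sender.
public class RecordingSender : IReportSender
{
    public string LastRecipient;
    public string LastBody;
    public void Send(string recipient, string body)
    {
        LastRecipient = recipient;
        LastBody = body;
    }
}

[TestClass]
public class ReportServiceTests
{
    [TestMethod]
    public void SendDailyReport_SendsTheBuiltReportToTheRecipient()
    {
        var sender = new RecordingSender();
        var service = new ReportService(new ReportBuilder(), sender);

        service.SendDailyReport("boss@example.com", 3, 42.50m);

        // We assert on the behavior (what was sent, to whom), not on private details.
        Assert.AreEqual("boss@example.com", sender.LastRecipient);
        Assert.IsTrue(sender.LastBody.StartsWith("3 items"));
    }
}
```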

I like the commitment and thoroughness you're expressing here, but my feeling is your adoption plan is going to take longer this way than if you started straight in, paired up and wrote some actual production tests. You'll need infrastructure anyway -- why not bootstrap your actual production code with a CI server and a TDD framework? (A lot of times this is a bigger pain point than learning "asserts." Get this part out of the way sooner rather than later). Then sit down and solve an actual problem with a fellow coder. Pick a simple bug or small feature and try to identify what the failing tests would look like and try to write them.
I like Hello Worlds for a brown bag lunch or something. But I can't think of any reason not to just jump right in and solve some problems.

I helped design and run a Test Driven Development training in house at my company for Java developers. We ended up doing it as a full day training, and we broke it up similar to how you have here.
Why do testing?
Simple example.
More complex example.
One thing I must stress though, is people need to learn by doing.
For my training, we set up a lab environment where each student could start with the same snapshot of code, and develop and run the tests themselves, with instructors walking around the training room helping individuals who were confused or weren't getting it.
For the "Simple Example" we had a "cooking show" version of code that was on a projector and went through the TDD process step by step. Developers would have to go through the process of writing a test, then creating implementation that was just sufficient to pass the test, then repeat. At each phase, a prepared solution to the current phase was shown on the projector after the students had some time to try it on their own.
For the "Complex Example" we created a set of requirements, and then allowed the students to come up with their own solutions using TDD to do so. We even had an exercise where requirements surprise, surprise changed suddenly part way through the exercise.
I like your idea of doing it over a longer period of time with regular checkups. One downfall of our training was a lack of followup. Developers benefited by the training, I'm sure, but many I think, did not take the practice back to their ordinary work. Regular checkups would help instill unit testing as a matter of habit.
For more ideas, check out my answer to this question.

Getting people interested in unit testing on someone else's/sample code will not be as effective as having folks "start with the testcase" on some new code they have to write.
Cover the basics of starting with a test case and triangulating to a valid result (all the while emphasizing that a failing test is progress).
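To illustrate what I mean by triangulating: a minimal sketch (the Discount class here is invented purely for illustration) - the first failing test gets passed with a hard-coded return value, and the second test forces the general rule out of it:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DiscountTests
{
    // Test 1: written first, fails, and can be made green with a hard-coded "return 5m;".
    // That failing-then-passing step is progress.
    [TestMethod]
    public void TenPercentOff_FiftyDollars_IsFive()
    {
        Assert.AreEqual(5m, Discount.TenPercentOff(50m));
    }

    // Test 2: a second example triangulates the hard-coded answer into the real rule below.
    [TestMethod]
    public void TenPercentOff_TwoHundredDollars_IsTwenty()
    {
        Assert.AreEqual(20m, Discount.TenPercentOff(200m));
    }
}

public static class Discount
{
    public static decimal TenPercentOff(decimal amount)
    {
        return amount * 0.10m;   // generalized once the second test forced it
    }
}
```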
That said, my personal experience is that things like this are better done in a pair programming environment. It's extra work for you, but the combination of one-on-one, some ownership and a working example they can reference at the end of it will make adoption much easier.
Getting a coverage tool running at some point later on can, given the right environment, provide a fun, competitive and gratifying way of marking progress. Making this in any way part of compensation/bonuses is a really bad idea. Done correctly, folks will adopt it because it works.

I tried to introduce TDD in a small team (7-9 programmers). Lectures failed. People are skeptical, TDD sounds like snake oil. Plus, there's always the proverbial Willy.
In the end, instead of TDD, people on the team do occasional DDT (Development-Driven Tests): they write unit tests after the code to confirm that it does what they meant it to. It looks like utter failure until you realize that even this form of developer-driven QA is much better than what you had before because, even if it doesn't really improve the code for future revisits, it cuts into deployed bugs.
There was, however, one key difference compared to your setting: the push didn't come from the management, they were just good enough to let the programmers decide for themselves.
The lectures failed, but I didn't put all the eggs in a single basket. I wrote a bunch of tests for many of the smaller, more focused functions and classes used across the project - those little buggers you occasionally need to bend a bit to fit exactly into the next call site, hoping you didn't break any of the existing ones (you looked around the code, the change seemed safe). It was running the tests after every other svn up (not only by me), and the subsequent replies to the guilty commit emails ("you broke it, pls fix ASAP!"), that finally convinced them of at least the partial utility of unit tests.
No matter what examples you come up with for the lectures, they'll tell you it only worked because the CUT was unrealistically (or unusually) isolated from the rest of the code. They'll ask you to write unit tests for a convoluted, buggy God Class (preferably a Singleton, everybody loves global variables, no?) because, to the best of their knowledge, that's what real-world code looks like, and they think it'll look like that with TDD as well.
Screw lectures. Have management buy everybody a few books for Christmas: at the very least TDD By Example (Beck) and Refactoring (Fowler). Speaking from experience, the books won't necessarily have any impact on most of them (more of the "that wasn't real-world enough" bullshit), but it's bound to stick with someone, and, no offense intended, I'd bet $1000 that Beck is the better evangelist of you two. ;) Practice TDD yourself in code you share with them. If it really works, they'll follow. Sss-looowww-lyyyy, but they will.
Then again, maybe you don't have that many Willies on your team, maybe your colleagues are eager, just in need of a leader, I don't really know, you didn't tell.
First commandment: don't be put off by slow and/or partial intake: every inch counts! (I'm preaching to myself here, ya know...)

I think it will be harder to start with test-after type of testing. A major difference compared to test-first is that test-after does not guide you in writing code which is easy to test. Unless you have first done TDD, it will be very hard to write code for which it's easy to write tests.
I recommend starting with TDD/BDD and after getting the hang of that, continue learning how to do test-after on legacy code (also PDF). IMO, doing test-after well is much harder than test-first.
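One common test-after technique for legacy code (my own aside, not from the links above) is the characterization test: before touching anything, you write tests that pin down what the code currently does, surprises included. A minimal sketch, with LegacyPriceCalculator standing in for whatever untested class you inherited:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Pretend this is the existing, untested class you dare not touch yet.
public class LegacyPriceCalculator
{
    public decimal Quote(int quantity)
    {
        decimal price = quantity * 9.99m;
        if (quantity > 10) price = price * 0.9m;   // undocumented bulk discount
        return price;
    }
}

[TestClass]
public class LegacyPriceCalculatorCharacterizationTests
{
    // These asserts record today's actual behaviour (surprises included),
    // so any refactoring that changes it fails loudly.
    [TestMethod]
    public void Quote_ForOneItem_MatchesCurrentBehaviour()
    {
        Assert.AreEqual(9.99m, new LegacyPriceCalculator().Quote(1));
    }

    [TestMethod]
    public void Quote_ForTwelveItems_IncludesTheHiddenBulkDiscount()
    {
        Assert.AreEqual(107.892m, new LegacyPriceCalculator().Quote(12));
    }
}
```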
In order to learn TDD, I recommend finishing this tutorial. That should get you started on what kind of tests to write.

I think it's a good outline for a session or two. But I'd strongly recommend standing up a testing infrastructure first and setting the developers up for early success, rather than shoving the developers head-first into the unit testing world and shouting "ready, set, GO!"
Assuming you develop in an IDE, a one-click test generator tool will really help with step one, test creation. Beyond that:
• Integrate a GUI-based test runner with your IDE, and you really need an automated test runner integrated with your CI builds.
• Having a code coverage tool integrated with the test runner will help developers increase their coverage.
• Provide developers with a complete framework for writing the tests, and that includes documenting how to organize the test source code (test naming standards, project settings, folder locations, etc.). Tools that automate any of the manual steps will go a long way.
• Provide a mock framework, including mock objects for any in-house or third-party library APIs you depend on.
• If you're going to recommend a database full of fake data (instead of a suite of fake data access objects), create one with a few key tables, populate it with test data, and provide whatever it takes to reset it between test runs. If not, provide a few fake DAOs of your own to serve as examples.
• You should have a small suite of tests already in your source control system testing production code, to show as examples.
• And you need to have all this documented so you can hand it out in meeting #1. (Don't forget to test the document!)
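For the fake-DAO point, something like the following is the kind of hand-out example I mean (ICustomerDao and the in-memory fake are invented names; adapt to whatever your data layer looks like):

```csharp
using System.Collections.Generic;

public class Customer
{
    public int Id;
    public string Name;
}

// The data-access interface that production code depends on.
public interface ICustomerDao
{
    Customer FindById(int id);
    void Save(Customer customer);
}

// A fake DAO to hand out as an example: no database, just a dictionary,
// so tests stay fast and need no connection strings or teardown scripts.
public class InMemoryCustomerDao : ICustomerDao
{
    private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

    public Customer FindById(int id)
    {
        Customer found;
        return _store.TryGetValue(id, out found) ? found : null;
    }

    public void Save(Customer customer)
    {
        _store[customer.Id] = customer;
    }
}
```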
One of our teams tried the ad hoc approach a couple years ago, but adoption was poor. Without the definitions or the organization of the tests, individual developers kind of did their own thing. Lack of a naming standard made it difficult to identify the right tests to run for which modules, and few people bothered. We had no IDE integration for any of it. Nobody wanted to write mocks, and we didn't provide mocks in advance for the framework we use. And it's really hard to add tests to legacy code that wasn't written with Dependency Injection in mind.
I'm now trying to get our infrastructure reorganized and ready for another shot at the task. Only after it's standing up will I work on trying to get the developers to use it again.
However, you have the one thing we were kind of lacking last time, and that's strong management support. Our previous managers were kind of lukewarm to the idea because they didn't want to delay coding by having to write all these tests. We now have new management and they are focusing on code quality -- automated unit tests are now in vogue, so it's a good time for us to try again.

Is a classroom-style approach going to work well? That's the question that comes to my mind, as what may happen here is so much separation between knowledge and application of the ideas that, while they may know it, they don't quite get how to use it.
Just to give a different approach, one could jump into whatever projects are going on and start trying to apply some of this. It may take a while to trickle down through everyone, as the idea would be that you'd train one person, then as a pair you'd train another pair, and so on, to try to get everyone up to the same point. This way the theoretical stuff that isn't relevant gets taken out early.
Another idea would be to see if it makes sense to try to form teams within the group of 10, such as 2 groups of 3 and a group of 4, or 5 pairs, so that each person can present what they've done and how well they use this.
I'd second the boundary testing, as well as showing why it can be useful to test with invalid input to see that this doesn't cause the system to roll over and die.
Lastly, it may make sense to have time devoted to handling questions and other 1:1 issues that people have as not everyone will want to ask questions in front of a large group.
EDIT: A caution on Lesson 2, which is a good idea but has a few hazards to note. First, be humble in doing this lesson so that you aren't coming across as a show-off wanting to slam everyone else. Second, I'd suggest leaving some tests incomplete when going over things so that there is an interactive aspect here that prevents the other developers from merely tuning out or rolling their eyes. Third, a follow-up lesson or two of others showing what they did or you going over what they did may be more useful to show that this is a team effort in some ways.

If management is serious about this, then they have to take the bull by the horns and take action themselves, not just have a little training. Start having them ask to see the tests in code reviews. Once this is common and people know they will not pass the code review without a test, have them refuse to deploy new code until there are tests written. It will only take a couple of iterations of this before people know that they can't get away without writing the tests, and then they will do so fairly reliably.
Yes, the first time the manager refuses to deploy, something will probably be late. They have to take that into account in their own planning. They also will need to start adding test-writing time to their task estimates. No one wants to do this if they think they will be blamed because it takes longer to write tests and code than just to write code. Have a published schedule for how this will be implemented (you need to give people some time to learn how to do this before it is required for all new development), and on the drop-dead date, nothing gets to prod without tests.

I like it. One comment I have is it's probably worth dropping in some guidelines/advice on how to go about testing threaded/asynchronous code.
Also, I'd make a comment about boundary testing, i.e. using unit tests to state your assumptions about any 3rd-party API you utilize.
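A small sketch of the kind of boundary test I mean - pinning down assumptions about framework or 3rd-party behaviour so an upgrade or a misunderstanding fails loudly (the two assumptions below, banker's rounding and Split keeping empty entries, are just examples; pick whatever APIs your code actually leans on):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Boundary tests: they document what we BELIEVE the framework/library does,
// and fail fast if an upgrade (or a misunderstanding) proves us wrong.
[TestClass]
public class FrameworkAssumptionTests
{
    [TestMethod]
    public void MathRound_UsesBankersRounding_ByDefault()
    {
        Assert.AreEqual(2.0, Math.Round(2.5));   // midpoint rounds to even, not up to 3.0
    }

    [TestMethod]
    public void StringSplit_KeepsEmptyEntries_ByDefault()
    {
        string[] parts = "a,,b".Split(',');
        Assert.AreEqual(3, parts.Length);
        Assert.AreEqual("", parts[1]);
    }
}
```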

Just because I have been doing something where TDD (and unit tests in general) has been invaluable, and because it might make a good demo, I'd suggest an example using Regex; especially if you aren't a Regex expert, it's quite easy to make mistakes.
It'd also make for a good 'show, not tell' demo. If you just:
Step 1: tell them to write some code with a Regex to match a certain pattern.
Step 2: expand the pattern to something else.
Step 3: show how unit tests immediately let them be sure that a) it matches the new pattern and b) it hasn't broken any of the previous matches or let through results that shouldn't get through (sketched below).
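A rough sketch of how step 3 could look (the 'email-ish' pattern here is only a stand-in for whatever pattern the exercise actually uses):

```csharp
using System.Text.RegularExpressions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class EmailPattern
{
    // Step 1 pattern; in step 2 it gets extended, and these tests immediately
    // show whether the old matches still pass.
    private static readonly Regex Pattern =
        new Regex(@"^[\w.+-]+@[\w-]+\.[\w.-]+$");

    public static bool IsMatch(string candidate)
    {
        return Pattern.IsMatch(candidate);
    }
}

[TestClass]
public class EmailPatternTests
{
    [TestMethod]
    public void Matches_SimpleAddress()
    {
        Assert.IsTrue(EmailPattern.IsMatch("dev@example.com"));
    }

    [TestMethod]
    public void Matches_AddressWithPlusTag()
    {
        Assert.IsTrue(EmailPattern.IsMatch("dev+tests@example.com"));
    }

    [TestMethod]
    public void DoesNotMatch_MissingHost()
    {
        Assert.IsFalse(EmailPattern.IsMatch("dev@"));
    }

    [TestMethod]
    public void DoesNotMatch_WhitespaceInside()
    {
        Assert.IsFalse(EmailPattern.IsMatch("dev @example.com"));
    }
}
```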
As general C&C on your plan: it's hard to get an idea of exactly what sequence events will go in. Also, semi-monthly seems far too infrequent if there's just a void in between with no push to apply their knowledge. The last thing you want is for it to become some meeting they go to and forget about as soon as they are out of the room!
Do you have sufficient downtime in your current workload to address technical debt and get the team to actually pair up and start looking at adding unit tests to some existing functionality and refactoring where necessary to allow mocking?
I would say that will prove far more effective than lectures and potted workshops, YMMV.

Related

What is test-driven development (TDD)? Is an initial design required?

I am very new to test-driven development (TDD), not yet started using it.
But I know that we have to write tests first and then the actual code to pass the test and refactor it till the design is good.
My concern over TDD is where it fits in our systems development life cycle (SDLC).
Suppose I get a requirement of making an order processing system.
Now, without having any model or design for this system, how can I start writing tests?
Shouldn't we need to define the entities and their attributes before proceeding?
If not, is it possible to develop a big system without any design?
There are two levels of TDD: ATDD, or acceptance test driven development, and normal TDD, which is driven by unit tests.
I guess the relationship between TDD and design is influenced by the somewhat "agile" concept that source code IS the design of a software product. A lot of people reinforce this by translating TDD as Test Driven Design rather than development. This makes a lot of sense as TDD should be seen as having a lot more to do with driving the design than testing. Having acceptance and unit tests at the end of it is a nice side effect.
I cannot really say too much about where it fits into your SDLC without knowing more about it, but one nice workflow is:
For every user story:
Write acceptance tests using a tool like FitNesse or Cucumber; these specify what the desired outputs are for the given inputs, from a perspective that the user understands. This level automates the specifications, or can even replace specification documentation in ideal situations.
Now you will probably have a vague idea of the sort of software design you might need as far as classes / behaviour etc goes.
For each behaviour:
Write a failing test that shows how you would like calling code to use the class (a minimal sketch of one such cycle follows below).
Implement the behaviour that makes the test pass.
Refactor both the test and actual code to reflect good design.
Go onto the next behaviour.
Go onto the next user story.
Of course the whole time you will be thinking of the evolving high level design of the system. Ideally TDD will lead to a flexible design at the lower levels that permits the appropriate high design to evolve as you go rather than trying to guess it at the beginning.
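To make the per-behaviour loop concrete with the order-processing example from the question, here is a minimal sketch of one cycle at the unit level (class names invented; the FitNesse/Cucumber acceptance test would sit one layer above this):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderTests
{
    // Written first: it shows how we WANT calling code to use Order,
    // and it fails until the behaviour below exists.
    [TestMethod]
    public void AddingAnEntry_IncreasesTheOrderTotal()
    {
        var order = new Order();

        order.AddEntry("Widget", 2, 10m);

        Assert.AreEqual(1, order.Entries.Count);
        Assert.AreEqual(20m, order.Total);
    }
}

// The minimal implementation that makes the test pass; refactor as the next behaviours arrive.
public class Order
{
    private readonly List<OrderEntry> _entries = new List<OrderEntry>();

    public IList<OrderEntry> Entries { get { return _entries; } }
    public decimal Total { get; private set; }

    public void AddEntry(string product, int quantity, decimal unitPrice)
    {
        _entries.Add(new OrderEntry { Product = product, Quantity = quantity, UnitPrice = unitPrice });
        Total += quantity * unitPrice;
    }
}

public class OrderEntry
{
    public string Product;
    public int Quantity;
    public decimal UnitPrice;
}
```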
It should be called Test Driven Design, because that is what it is.
There is no practical reason to separate the design into a specific phase of the project. Design happens all the time. From the initial discussion with the stakeholder, through user story creation, estimation, and then of course during your TDD sessions.
If you want to formalize the design using UML or whatever, that is fine, just keep in mind that the code is the design. Everything else is just an approximation.
And remember that You Aren't Gonna Need It (YAGNI) applies to everything, including design documents.
Writing tests first forces you to think first about the problem domain, and acts as a kind of specification. Then, in a second step, you move to the solution domain and implement the functionality.
TDD works well iteratively:
Define your initial problem domain (can be small, evolutionary prototype)
Implement it
Grow the problem domain (add features, grow the prototype)
Refactor and implement it
Repeat step 3.
Of course you need to have a vague architectural vision upfront (technologies, layers, non-functional requirements, etc.). But the features that bring added value to your application can be introduced nicely with TDD.
See related question TDD: good for a starter?
With TDD, you don't care much about design. The idea is that you must first learn what you need before you can start with a useful design. The tests make sure that you can easily and reliably change your application when the time comes that you need to decide on your design.
Without TDD, this happens: you make a design (which is probably too complex in some areas, plus you forgot to take some important facts into account since you didn't know about them). Then you start implementing the design. With time, you realize all the shortcomings of your design, so you change it. But changing the design doesn't change your program. Now you try to change your code to fit the new design. Since the code wasn't written to be changed easily, this will eventually fail, leaving you with two designs (one broken and the other in an unknown state) and code which doesn't fit either.
To start with TDD, turn your requirements into tests. To do this, ask "How would I know that this requirement is fulfilled?" When you can answer this question, write a test that implements the answer to it. This gives you the API to which your (to-be-written) code must adhere. It's a very simple design, but one that a) always works and b) is flexible (because you can't test inflexible code).
Also, starting with the test turns you into your own customer. Since you try hard to make the test as simple as possible, you will create a simple API that makes the test work.
And over time, you'll learn enough about your problem domain to be able to make a real design. Since you have plenty of tests, you can then change your code to fit the design. Without terminally breaking anything on the way.
That's the theory :-) In practice, you will encounter a couple of problems but it works pretty well. Or rather, it works better than anything else I've encountered so far.
Well, of course you need a solid functional analysis first, including a domain model; without knowing what you'll have to create in the first place, it's impossible to write your unit tests.
I use test-driven development to program, and I can say from experience it helps create more robust, focused and simpler code. My recipe for TDD goes something like this:
Using a unit-test framework (I've written my own) write code as you wish to use it and tests to ensure return values etc. are correct. This ensures you only write the code you're actually going to use. I also add a few more tests to check for edge cases.
Compile - you will get compiler errors!!!
For each error add declarations until you get no compiler errors. This ensures you have the minimum declarations for your code.
Link - you will get linker errors!!!
Write enough implementation code to remove the linker errors.
Run - your unit tests will fail. Write enough code to make the tests succeed.
You've finished at this point. You have written the minimum code you need to implement your feature, and you know it is robust because of your tests. You will also be able to detect if you break things in the future. If you find any bugs, add a unit test to test for that bug (you may not have thought of an edge case, for example). And you know that if you add more features to your code, you won't make it incompatible with existing code that uses your feature.
I love this method. Makes me feel warm and fuzzy inside.
TDD implies that there is some existing design (external interface) to start with. You have to have some kind of design in mind in order to start writing a test. Some people will say that TDD itself requires less detailed design, since the act of writing tests provides feedback to the design process, but these concepts are generally orthogonal.
You need some form of specification, rather than a form of design -- design is about how you go about implementing something, specification is about what you're going to implement.
Most common form of specs I've seen used with TDD (and other agile processes) are user stories -- an informal kind of "use case" which tends to be expressed in somewhat stereotyped English sentences like "As a <role>, I can <capability>" (the form of user stories is more or less rigid depending on the exact style/process in use).
For example, "As a customer, I can start a new order", "As a customer, I can add an entry to an existing order of mine", and so forth, might be typical if that's what your "order entry" system is about (the user stories would be pretty different if the system wasn't "self-service" for users but rather intended to be used by sales reps entering orders on behalf of users, of course -- without knowing what kind of order-entry system is meant, it's impossible to proceed sensibly, which is why I say you do need some kind of specification about what the system's going to do, though typically not yet a complete idea about how it's going to do it).
Let me share my view:
If you want to build an application, along the way you need to test it, e.g. check the values of variables you create by code inspection, or quickly drop in a button that you can click to execute a part of the code and pop up a dialog showing the result of the operation, etc. TDD, on the other hand, changes your mindset.
Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile; somewhere in your head you know the requirement, and you just code and test via buttons and pop-ups or code inspection. This is "syntax-debugging-driven development". But when you are doing TDD, it is "semantic-debugging-driven development", because you write down your thoughts/goals for your application first by using tests (which are a more dynamic and repeatable version of a whiteboard). Those tests exercise the logic (the "semantics") of your application and fail whenever you have a semantic error, even if your application compiles without syntax errors.
In practice you may not know or have all the information required to build the application. Since TDD kind of forces you to write tests first, you are compelled to ask more questions about how the application should behave at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting precious time with TDD (even though it may not feel like that initially).

Forcing Unit Testing on Developers

First a little background. The company I work for writes web based software that is a hosted solution for our customers (ie ASP (Application Service Provider)). We are adopting agile practices such as Scrum and we execute sprints to build new features for our product.
I am a proponent of TDD (Test Driven Design), and as a part of what I deliver in a sprint I always write tests and I always get them integrated with the build (ie ccnet); however the other developers do not follow this practice and it is not enforced.
Is it a good practice to force a development group into providing unit tests as a part of what is delivered in a sprint?
Unless you are in a position of authority, the best thing you can do is to convince them of the value of the test suite.
It's very difficult to get developers to see the light on this issue if they aren't seeing it done right.
Try to pair with another developer and show them the benefits and the clarity that comes from writing the tests FIRST. If you don't do this, they are likely to write all of their code, get it working, and then write tests. So, from their point of view, it will feel like simply an extra task that doesn't help them get things done.
Also keep in mind that people often do not understand how to write good tests. Even more, some do not know how to make use of tools like jmock, which can lead to them getting stuck and giving up on writing a test.
Forcing anything onto anybody is not a good practise in my view. I would show them the benefits of TDD at every opportunity I get. This should automatically get the rest of the team to voluntarily practise TDD.
You don't want lip-service unit testing, you want whole-hearted unit testing. That isn't something that can be forced. What you need to do is influence your teammates over time to see the benefits of unit testing and to develop a unit testing culture.
To start with, you need to understand that different people change for different reasons. In Crossing the Chasm terms, visionaries will adopt new techniques because they are better, but pragmatists adopt new techniques either because they solve a problem/pain they currently have or because everyone else is adopting them.
Your mission then is to show how unit testing can solve a pain your team currently feels. As you win people over one by one, eventually you reach a tipping point where unit testing is the norm and everyone goes along with it. However, if you can't tie unit testing to a pain your team feels, then your efforts to convince them will likely fail.
Considering that unit testing improves code and software quality in the long term, I would say that, yes, it is good practice to have your developers provide unit tests -- be it part of some kind of sprint or not.
The two main barriers to unit tests I've seen are:
the developers don't get the point: "why would we write more code just to test?"
"we don't have time to write unit tests"
To answer the first point, you'll have to provide some sort of demonstration / training, I suppose; if you can get developers to see why unit tests are useful, they will like them, and use/develop them; but they need to see why they are useful: unless you are their boss, you cannot force people to develop unit tests.
And even if you are their boss, they will probably not do the best possible job if they are being forced: unit testing is often done better if people understand why and how!
To answer the second point... well, you obviously need to get your developers some "special" time to develop unit tests; it can mean less time spent on manual testing, btw.
Another thing is: it is hard to know "what to test" and "how to test": you will need to explain / demonstrate that to your colleagues: some things cannot be tested, some things don't need to be, and some things are not "unit-testable" -- well, I suppose, unless your software is really well engineered ^^
I've gotten in that position on many jobs and contracts in the past, so I've finally gotten discouraged and embraced the darkness by advocating Development Driven Development.
I've found that when most managers embrace XP, they're embracing throwing out the documentation, not really doing TDD. Programmers on most teams are rewarded for quick hacks that get the defect out of their queue, and it's one manager in about ten who has the guts to stand up to senior management as an advocate of the overall quality of the product as opposed to the bottom-line-for-this-quarter way of doing business. After all, most software jobs that pay anything are corporate sponsored, and Freud proved that corporations are insane.
Or at least, he should have.
There is a great Joel on Software article on this called Getting Things Done When You're Only a Grunt. His strategies applied to unit testing would be the following:
Just Do It
Most important, I think. Regularly write tests, do not make it appear as if you yourself see them as a minor part of development.
Harness the Power of Viral Marketing
If you see an error in someone else's code, write a test that triggers it and present both to him. Maybe he sees your point.
Create a Pocket of Excellence
Identify the team members who are open to the idea but not quite sure how to work with unit tests and set up a number of test classes until everyone of them knows how to do it. Then turn towards the other ones. Things are a lot easier to establish once you are not the only one anymore.
Neutralize The Bozos
There will be team members who are almost impossible to get to write tests. Maybe those should be dealt with by regularly breaking their code with a new commit - and then pointing out that since they don't have tests it was hard for you to notice.
It depends on your version control, but there is version control software that enables you to run scripts before check-in or before a merge to the production branch, and it will not let the developer check in if the unit tests fail.
First, talk to your manager. If he is convinced that testing is a good thing, add a coverage check to your build system. If the coverage of your unit tests falls below a certain level, you can handle it as a failed build. This gives your colleagues a measure, a way to see when they fail to deliver tested code.
In your case, NCover seems to integrate nicely into CC.NET.
Create a matrix that you want developers to achieve.
Make sure there is a subtask for Unit test creation for every story.
No. In my experience, TDD isn't all that useful in practice. I sometimes use it for really general fundamental classes (like geometry or generic data structures) that lend themselves naturally to automated tests. But for UI components or business logic, I find it's more trouble than it's worth.

Why should I use Test Driven Development? [duplicate]

Duplicate:
Why should I practice Test Driven Development and how should I start?
For a developer that doesn't know about Test-Driven Development, what problem(s) will be solved by adopting TDD?
[EDIT] Let's assume that the developer already (ab)uses a unit testing framework.
Here are three reasons that TDD might help a developer/team:
Better understanding of what you're going to write
Enforces the policy of writing tests a little better
Speeds up development
One reason to write the tests first is to have a better understanding of the actual code before you write it. To me, this is the main benefit of test driven development. When you write the test cases first, you think more critically about the corner cases. It's then easier to address them when you write the code and ensure that they're accurate.
Another reason is to actually enforce writing the tests. Often when people do unit-testing without the TDD, they have a testing framework set up, write some new code, and then quit. They think that the code already works just fine, so why write tests? It's simple enough that it won't break, right? But now you've lost the advantages of doing unit-tests in the first place (completely different discussion). Write them first, and they're already there.
Writing these tests first could mean that you don't need to launch the program in a debugging environment (slow — especially for larger projects) to test if a few small things work. Of course there's no excuse for not doing so before committing changes.
Convincing yourself or other people to write the tests first may be difficult. You may have better luck getting them to write both at the same time which may be just as beneficial.
Presumably you test code that you've written before you commit it to a repository.
If that's not true you have other issues to deal with.
If it is true, you can look at writing tests using a framework as a way to automate those main routines or drivers that you currently write so you can run all of them automatically at the push of a button. You don't have to pore over output to decide if the test passed or failed; you embed the success or failure of the test in the code and get a thumbs up or down decision right away. Running all the tests at once reduces the chances of a "whack a mole" situation where you fix something in one class and break something else. All the tests have to pass.
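As a concrete (and invented) illustration: the throwaway driver many of us already write, and the same check rewritten so the framework gives the thumbs up or down instead of your eyeballs:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class PriceMath
{
    public static decimal AddSalesTax(decimal net, decimal rate)
    {
        return net + net * rate;
    }
}

// Before: a throwaway driver -- run it and eyeball the console output.
public static class ManualDriver
{
    public static void Run()
    {
        System.Console.WriteLine(PriceMath.AddSalesTax(100m, 0.08m));   // "is 108 right?"
    }
}

// After: the same check as a test -- pass/fail is decided by the Assert,
// and it runs with every other test at the push of a button.
[TestClass]
public class PriceMathTests
{
    [TestMethod]
    public void AddSalesTax_EightPercentOnOneHundred_Is108()
    {
        Assert.AreEqual(108m, PriceMath.AddSalesTax(100m, 0.08m));
    }
}
```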
Sounds good so far, yes?
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
If you're already using a test framework, getting good value out of the tests you write, and have meaningful code coverage up around 70%, then I think you're doing well. I'm not sure that TDD will give you much more value. It's up to you to decide whether or not you go that extra mile. Personally, I don't do it. I write tests after the class and refactor if I feel the need. Some people might find it helpful to write the test first knowing it'll fail, but I don't.
(This is more of a comment agreeing with duffymo's answer than an answer of its own.)
duffymo answers:
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
I think it's actually to force coders to think about what their code is doing. Having to think about a test makes one consider what the code is supposed to do: what the pre-conditions and post-conditions are, which functions are primitive and which are composed of primitive functions, what the minimal necessary public interface is, and what's an implementation detail.
These are all things I routinely think about, so like you, "test first" doesn't add a whole lot, for me. And frankly (I know this is heresy in some circles) I like to "anchor" the core ideas of a class by sketching out the public interface first; that way I can look at it, mentally use it, and see if it's as clean as I thought it was. (A class or a library should be easy and intuitive for client programmers to use.)
In other words, I do what TDD tries to ensure happens by writing tests first, but like duffymo, I get there a different way.
And the real point of "test first" is to get a coder to pause and think like a designer. It's silly to make a fetish of how the programmer enters that state; for those who don't do it naturally, "test first" serves as a ritual to get them there. For those who do, "test first" doesn't add much -- and can get in the way of the programmer's habitual way of getting into that state.
Again, we want to look at results, not rituals. If a junior guy needs a ritual, a "stations of the cross" or a rosary* to "get in the groove", "test first" serves that purpose. If someone has their own way to get there, that's great too.
Note that I'm not saying that code shouldn't be tested. It should. It gives us a safety net, which in turn allows us to concentrate our attention on writing good code, even audacious code, because we know the net is there to catch errors.
All I am saying is that fetishistic insistence on "test first" confuses the method (one of many) with the goal, making the programmer think about what he's coding.
* To be ecumenical, I'll note that both Catholics and Muslims use rosaries. And again, it's a mechanical, muscle-memory way to put oneself into a certain frame of mind. It's a fetish (in the original sense of a magic object, not the "sexual fetish" meaning) or good-luck charm. So is saying "Om mani padme hum", or sitting zazen, or stroking a "lucky" rabbit's foot, (Not so lucky for the rabbit.) The philosopher Jerry Fodor, when thinking about hard problems, has a similar ritual: he repeats to himself, "C'mon, Jerry, you can do it!" (I tried that too, but since my name is not Jerry, it didn't work for me. ;) )
Ideally:
You won't waste time writing features you don't need.
You'll have a comprehensive unit test suite to serve as a safety net for refactoring.
You'll have executable examples of how your code is intended to be used.
Your development flow will be smoother and faster; you'll spend less time in the debugger.
But most of all, your design will be better. Your code will be better factored - loosely coupled, highly cohesive - and better formed - smaller, better-named methods & classes.
For my current project (which runs on a relatively heavyweight process), I have adopted a peculiar form of TDD that consists of writing skeleton test cases based on requirements documents and GUI mockups. I write dozens, sometimes hundreds of those before starting to implement anything (this runs totally against "pure" TDD which says you should write a few tests, then immediately start on a skeleton implementation).
I have found this to be an excellent way to review the requirements documents. I have to think about the behaviour described in them much more intensively than if I were just to read them. In consequence, I find many more inconsistencies and gaps in them which I would otherwise only have found during implementation. This way, I can ask for clarification earlier and have better requirements when I start implementing.
Then, during implementation, the tests are a way to measure how far I've yet to go. And they prevent me from forgetting anything (don't laugh, that's a real problem when you work on larger use cases).
And the moral is: even when your dev process doesn't really support TDD, it can still be done in a way, and improve quality and productivity.
I personally do not use TDD, but one of the biggest pros I can see with the methodology is the assurance of customer satisfaction. Basically, the idea is that the steps of your development process are these:
1) Talk to customer about what the application is supposed to do, and how it is supposed to react to different situations.
2) Translate the outcome of 1) into Unit Tests, which each test one feature or scenario.
3) Write simple, "sloppy" code that (barely) passes the tests. When this is done, you have met your customer's expectations.
4) Refactor the code you wrote in 3) until you think you've done it in the most effective way possible.
When this is done you have hopefully produced high-quality code, that meets your customer's needs. If the customer now wants a new feature, you start the cycle over - discuss the feature, write a test that makes sure it works, write code that passes the test, refactor.
And as others have said, each time you run your tests you ensure that the old code still works, and that you can add new functionality without breaking old one.
Most of the people I have talked to don't use a complete TDD model. They usually find the best testing model that works for them. Find yours: play with TDD and find where you are the most productive.
TDD (Test Driven Development/ Design) provides the following advantages
ensures you know the story card's acceptance criteria before you start
ensures that you know when to stop coding (i.e., when the acceptance criteria have been met, which prevents gold plating)
As a result you end up with code that is
testable
clean design
able to be refactored with confidence
the minimal code necessary to satisfy the story card
a living specification of how the code works
able to support a sustainable pace of new features
I made a big effort to learn TDD for Ruby on Rails development. It took several days before I really got into it. I was very skeptical, but I made the effort because programmers I respect support it.
At this point I feel it was definitely worth the effort. There are several benefits which I'm sure others will be happy to list for you. To me the most important advantage is that it helps avoid that nightmare situation late in a project where something suddenly breaks for no apparent reason and then you're spending a day and a half with the debugger. It helps prevent your code base from deteriorating as you add more and more logic to it.
It is common knowledge that writing tests and having a large number of automated tests are a Good Thing.
However, without TDD, it often just becomes tedious. People write tests, then leave them, and the tests do not get updated as they should, nor do new features get tested as often as they should.
A big part of this is because the code has become a pain to test - TDD will influence your design so that it is much easier to test. Because you've used TDD, you have a good number of tests, which makes it much easier to find regressions whenever your code or requirements change, simplifying debugging dramatically, causing an appreciation of good TDD and encouraging more tests to be written when changes are needed - and we're back to the start of the cycle.
There are many advantages:
Higher code quality
Fewer bugs
Less wasted time
Any of those alone would be sufficient justification to implement TDD.

Getting started in Unit Testing as a group in these economic times

We have a group of a few developers and some business analysts. We as developers would like to start adding unit testing as part of our coding practices so that we can deliver maintainable and extensible code, especially since we will also be the ones supporting and enhancing the application in the future. But in this economic downturn we are struggling with the push to get started, because we are challenged to just deliver solutions as fast as possible, with quality not being the top priority. What can we do or say to show that we will be able to deliver faster and with higher quality, as well as prepare for future enhancements?
Basically we just need to get over the learning curve of incorporating unit testing into our daily work, but we cannot do that now because it is viewed as an unnecessary overhead that would delay our projects that the business needs now.
We as developers want to provide the highest value to the business, especially quickly, but we know that we will also need to do this 6 months from now and we need to plan for that as well, and we believe that unit testing will help us greatly down the line.
EDIT
All awesome input, thank you. I personally know how to write unit tests, but I don't have the experience to say whether or not a given unit test is good. I have just ordered Test Driven Development: By Example and will take the initiative to get the ball rolling on incorporating unit testing in our group.
You need to just start doing it, with or without permission. In the end it will make you more productive and increase your code quality. You can start small by including unit tests for something critical, and once you've shown the benefits, you're in.
Start unit testing the functions or classes when you create them. Begin with simple classes/functions that do not have external dependencies (DB, file system).
Share your progress inside the team. Count the number of tests and display a big chart showing your progress (unless the management/analysts are very hostile against unit testing).
Read about TDD: "Test-Driven Development: By Example". Writing the tests first leads to code that can be easily tested. If you write the production code first, you may have a hard time putting it under test.
I would like to recommend the book
Pragmatic Software Testing
The book Pragmatic Unit Testing is also a good book; although it focuses more on C# and NUnit, there are topics in it which are applicable regardless of language.
Just do it...
If it's part of your personal process for any new code, you'll at least have that covered. You'll probably never get permission to add unit tests to cover all your code ever but you can at least be sure that no future changes undo work from a given point in time.
Think of it from the business side: why do you need to write code to prove the code you've already written is correct? Is it wrong?
Getting 100% coverage would be nice but take your time getting there and don't write tests for existing code that isn't currently wrong; write the tests as it breaks so you at least never undo something unintentionally.
You don't even need to discuss it with management (though the situation you describe is far from ideal). It's like Design by Contract - I introduced it in previous code I worked on, just as part of the development process. Once it's in, and it works, trust me, no one will dare to remove it.
Unit testing can also be seen as a part of development. Hence, if you have "20 days" for developing features A, B and C, you can typically include unit tests in your estimations for development itself.
Making unit tests is surprisingly easy. Compared with multithreading problems or any sort of complex design, unit testing is very easy to grasp for any competent developer.
You can read good literature about it (you have dozens of tutorials online) in, say, half a day, and start doing your first tests in the afternoon.
Really - just do it!
I know this may not be readily available to you, but I believe getting someone who's test infected on your team to show you the way is the easiest way to maintain productivity when introducing testing. Your people obviously need to know the concepts, which they can do by reading books or attending user groups. A test infected developer can show you how to do it in your practical everyday work with an absolute minimal loss in productivity whilst doing so. It wouldn't surprise me if your speed increased immediately, but I wouldn't make any claims of that sort. With test experience also comes knowledge of testable designs, which I think is key.

What is the most convincing way to require formalized unit testing?

This certainly presupposes that unit testing is a good thing. Our projects have some level of unit testing, but it's inconsistent at best.
What are the most convincing ways that you have used, or have had used on you, to convince everyone that formalized unit testing is a good thing and that making it required is really in the best interest of the 'largeish' projects we work on? I am not a developer, but I am in Quality Assurance and would like to improve the quality of the work delivered to ensure it is ready to test.
By formalized unit tests, I'm simply talking about
Identifying the Unit Tests to be written
Identifying the test data (or describe it)
Writing these tests
Tracking these tests (and re-using as needed)
Making the results available
A very convincing way is to do formalized unit testing yourself, regardless of what your team/company does. This might take some extra effort on your side, especially if you're not experienced with this sort of practice.
When you can then show your code is better and you are being more productive than your fellow developers, they are going to want to know why. Then feed them your favorite unit testing methods.
Once you've convinced your fellow developers, convince management together.
I use Maven with the Surefire and Cobertura plugins for all my builds. The actual test cases are created with JUnit, DbUnit and EasyMock.
Identifying Unit Tests
I try to follow Test Driven Development but to be honest I usually just do that for the handful of the test cases and then come back and create tests for the edge and exception cases later.
Identifying Test Data
DbUnit is great for loading test data for your unit tests.
Writing Test Cases
I use JUnit to create the test cases. I try to write self documenting test cases but will use Javadocs to comment something that is not obvious.
Tracking & Making The Results Available
I integrate the unit testing into my Maven build cycle using the Surefire plugin, and I use the Cobertura plugin to measure the coverage achieved by those tests. I always generate and publish a web site including the Surefire and Cobertura reports as part of my daily build, so I can see which tests failed and which passed.
The event which convinced me was when we managed to regress the same bug three times, in three consecutive releases. Once I realised how much more productive I was as a programmer when I wasn't constantly fixing trivial mistakes after they had gone to the client, and could have a warm fuzzy feeling that my colleagues' code would do what they claimed it would, I became a convert.
Back in the day when I did COBOL development on mainframes, we did this religiously in the several companies I worked in, and it was accepted as the way you did things because the environment enforced it. I think it was a very typical scheme for the era, and some of the reasons might be applicable to you:
Like most mainframe environments, we had three realms: development, Quality Assurance and Production. Programmers developed in development and unit tested there, and once they had signed off and were happy, the unit was migrated to the QA environment (with the test and results docs), where it was system tested by dedicated QA staff. The development-to-QA migration was a formal step which happened overnight. Once QA'd, the code was migrated to Production - and we had very few bugs.
The motivation to get the unit testing done, and done right, was that if you didn't and a bug was found by QA staff, it was obvious that you hadn't done the work. Consequently your reputation depended on how rigorous you were. Of course most people would end up with the occasional bug, but coders who produced solid, tested code all the time soon got a star reputation, and those who produced buggy code got noticed too. The push was always to up your game, and the culture it produced was one that pushed towards bug-free code delivered first time.
Extracting the pertinent points:
Coder reputation tied up with delivery of bug free tested code
Significant overhead associated with moving unit-tested code to the next level, so there was motivation to get it right first time rather than repeat the process.
System testing performed by different people from those doing the unit testing - ideally a different team.
I'm sure your environment will differ, but the principles might be translatable.
Sometimes leading by example is the best way. I also find it helps to remind people that certain problems just don't happen when code is under test. Next time somebody asks you to write something, do it with tests regardless. Eventually your peers will be jealous of the ease with which you can change your code and know that it still works.
As for management, you need to emphasise how much time gets wasted in the nuclear explosion that occurs when you need to make a change to codebase X that isn't under test.
Many developers don't realise just how much they refactor without ensuring they are preserving behaviour across the entire system. For me this is the biggest benefit of unit testing and TDD.
Software requirements change
Software changes to suit the requirements
The only certainty is change. Changing code that is not under test requires the developer to be aware of every possible behavioural side effect. The reality is that the coders who think they can reason through every permutation do so by a painstaking process of trial and error until nothing obviously breaks, at which point they check in.
The pragmatic programmer recognizes that they are not perfect and all-knowing, and that tests are a safety net that allows them to walk the refactoring tightrope quickly and safely.
As for when to write tests on greenfield code, I'd advocate doing it as much as possible. Spend the time defining the behaviours you want out of your system and write tests initially to express those higher-level constructs. Unit tests can come as thoughts crystallize.
Hope this helps.
Education and/or certification.
Give your team members formal training in the field of testing - maybe with a certification exam (depending on your team members and your own attitude towards certification). You'll take testing to a higher level that way, and your team members will be more likely to take a professional attitude towards testing.
There is a big difference between convincing and requiring.
If you find a way to convince your colleagues to write them - great. However, if you create some formalized rules and require them to write unit tests, they will find a way to work around them. As a result you will get a bunch of unit tests that are worth nothing: there will be a unit test for every single class available, and all they will test is setters and getters.
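To make that concrete, here is the kind of test that satisfies a mandate but proves nothing, next to the kind you actually want; the Account class and InsufficientFundsException are invented for the example:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class AccountTest {

        // Written only to satisfy an "every class must have tests" rule:
        // it exercises a trivial setter/getter pair and can never catch a real defect.
        @Test
        public void setAndGetBalance() {
            Account account = new Account();
            account.setBalance(100.0);
            assertEquals(100.0, account.getBalance(), 0.001);
        }

        // A test somebody actually wanted to write: it pins down a business rule
        // and will fail if the overdraft behaviour is ever broken.
        @Test(expected = InsufficientFundsException.class)
        public void withdrawalBeyondBalanceIsRejected() {
            Account account = new Account();
            account.setBalance(100.0);
            account.withdraw(150.0);
        }
    }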
Think twice before creating and enforcing rules. Developers are good at overcoming them.
First time around you just need to go ahead and write them and show people that it's worth it. I've found on three projects that it's the only way to convince people. Some people who don't code (e.g. junior project managers) won't be able to see the value until it's staring them right in the face.
On my software team, we tend to write a small business case on these issues and present it to management in order to get the time to create and track tests. We explain that the time taken to test is well made up for when crunch time comes and everything is on the line. We also set up a Hudson build server to centralize the tracking of the unit tests. This makes it a lot easier for the developers to keep track of failing tests and to discover recurring problems.
Remind your team or the other developers that they're professionals, not amateurs. Worked for me!
Also, it's an industry standard these days. Without unit testing experience, they are less desirable and less valuable as employees to potential future employers.
As a team lead, it is my responsibility to ensure that my programmers do unit testing on all the modules they work on. At this point it's not even a question of how to convince them; it's required. Not sometimes, not on largish projects - all the time. Unit testing is the first line of defense against putting something into production that you will have to maintain. If something is put into production that has not been completely unit and system tested, it will come back to bite you. One of the policies we have here to support this is that if it blows up in production, or causes problems, then the programmer responsible for coding and testing that module is the one who has to take care of the problems, do the cleanup, etc. That alone is a fairly good motivator.
The other motivator is pride. I work in a shop of about 75 coders; although that is large by some standards, it's really small enough for all of us to know one another. It's also small enough that we know what one another is working on, and when something does move to production, we are aware of any abends, failures, etc. If you are careful and do the unit and system testing, the chances of moving something to production without causing failures increase significantly. It may take a failed move to production or two before you realize it, but there are great rewards involved in not messing up. It's really nice to hear congratulations in the hallway when you move a project in and it doesn't screw up.
Write a bunch of them and demonstrate that unit testing has improved your productivity and the quality of your code. Without some kind of proof, sometimes people won't believe it's worth it.
So, two years after I asked this question, I find that one unexpected answer was that moving to a new SDLC was what was needed. Five years ago, we established our first formal SDLC. It improved our situation, but left out some important things, such as automation. We are now in the process of establishing a new SDLC (under new management) where one of the tenets is automation. Not just automated unit tests, but automated functional tests.
I guess the lesson is that I was thinking too small. If you are going to change how you create software, go 'whole hog' and make a drastic change rather than proposing incremental change, if incremental change is not something you are used to.
You could take some inspiration from an initiative at Google. Their test team started putting up examples, tips and benefits inside the toilet cubicles to raise the profile of the merits of test automation.
https://testing.googleblog.com/2007/01/introducing-testing-on-toilet.html