TDD approach for already-developed source code - unit testing

I have been given existing source code and have to apply a TDD approach to it.
However, the general approach for TDD is:
Write the test.
Fail it.
Write the code.
Pass the test.
Refactor, if necessary.
How should I add tests to existing source code like this?
Suggestions for an ad-hoc approach to such a case are welcome.
Thanks

Don't forget step 0: understand the requirements. What you can do is discover the requirements, then write the test that demonstrates whether each requirement is satisfied. If it passes, then great. If it doesn't, you've found a bug. Either way, you've added a regression test.
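For instance, here's a minimal sketch of such a requirement-driven regression test, using Google Test; the applyDiscount function, its header, and the discount rule are all invented for illustration, not taken from the question:

```cpp
#include <gtest/gtest.h>
#include "pricing.h"  // hypothetical header from the existing codebase

// Discovered requirement (invented for this sketch): orders over 100.00
// get a 10% discount. If this test passes, it becomes a regression test;
// if it fails, you've found a bug or misunderstood the requirement.
TEST(PricingRequirements, OrdersOver100GetTenPercentDiscount) {
    EXPECT_DOUBLE_EQ(applyDiscount(200.00), 180.00);
}

TEST(PricingRequirements, SmallOrdersAreNotDiscounted) {
    EXPECT_DOUBLE_EQ(applyDiscount(50.00), 50.00);
}
```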
What you can't do is implement TDD (or any development practice) for code that's already been written: that boat has sailed. What you can do is enable future development on the codebase to benefit from test-driven development practices.

You could treat that code base as third party code and write some learning tests for it. Learning tests will allow you to discover the code while building yourself a nice test suite for further developments should you have to modify the code later.
If the legacy code is small enough, you could do that until code coverage gets near a comfortable 90 or 100%.
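A learning test pins down what the existing code actually does, rather than what a spec says it should do. A sketch, assuming a hypothetical parseFields utility found in the legacy code:

```cpp
#include <gtest/gtest.h>
#include "legacy_parser.h"  // hypothetical legacy module under exploration

// Learning tests: the expected values below were obtained by running the
// legacy code and observing its output, not by reading a specification.
TEST(LegacyParserLearningTests, EmptyInputYieldsNoFields) {
    EXPECT_TRUE(parseFields("").empty());
}

TEST(LegacyParserLearningTests, TrailingDelimiterYieldsTrailingEmptyField) {
    auto fields = parseFields("a;b;");
    ASSERT_EQ(fields.size(), 3u);  // surprised? then the test taught you something
    EXPECT_EQ(fields[2], "");
}
```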

Related

Convert project without introducing bugs

I have the C++ code of an exe which contains a UI and some processing logic.
My goal is to separate the UI from the processing and to convert the exe into a DLL.
In order to do that, I am thinking of generating unit tests before touching any code, then making my modifications and ensuring the tests do not fail.
The problem is that I am not sure whether this is the best approach and, if it is, whether there is a way to automatically generate unit tests.
BTW, I am using VS 2012.
Do you have any guidance for me?
It's relatively hard to write meaningful unit tests for GUIs. There are frameworks like FrogLogic's Squish that make GUI testing relatively easy, but most often, these tools are not free.
Note also that it is no small feat to write unit tests "after the fact", as the original code often has to be changed first to make it testable.
As far as I know, there are no tools for automatically bringing existing code under unit tests - if it were that easy, there would be no new bugs at all, right? As arne says in his answer, if code was not designed to be tested in the first place, it usually has to be changed to be testable.
The best you can do, in my opinion, is to learn some techniques for introducing unit tests with relatively few changes (so that you can introduce the unit tests before you start the "real" modifications); one book on this subject I've read recently is Michael Feathers' "Working Effectively with Legacy Code" (Amazon link: http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052). Although it has some shortcomings, it gives pretty detailed descriptions of techniques for introducing unit tests without much upheaval.
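One dependency-breaking technique in that spirit, sketched very roughly for the exe-to-DLL scenario in the question (all names here are illustrative, not from the actual project): pull the processing logic away from the UI by introducing a seam, so the logic can be unit tested - and later exported from the DLL - without creating any windows.

```cpp
#include <string>

// The seam: a narrow interface between the processing code and the UI.
class IProgressReporter {
public:
    virtual ~IProgressReporter() = default;
    virtual void report(int percent) = 0;
};

// Pure processing logic: this is what would move into the DLL, and it is
// testable in isolation because the UI dependency is injected.
int processData(const std::string& input, IProgressReporter& progress) {
    progress.report(0);
    int result = static_cast<int>(input.size());  // stand-in for the real work
    progress.report(100);
    return result;
}

// In a unit test, a fake reporter records calls; in the exe, a real
// implementation would drive the actual UI.
class FakeReporter : public IProgressReporter {
public:
    int lastPercent = -1;
    void report(int percent) override { lastPercent = percent; }
};
```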

TDD vs. Unit testing [closed]

My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program but it is a struggle. Which brings me to my question(s).
There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD does a compromise still bring added benefits?
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
What's the best way to bring a struggling team into TDD? And failing that is it still worth writing unit tests even if it is after the code is written?
EDIT
What I've taken away from this is that it is important for us to start unit testing somewhere in the coding process. Those in the team who pick up the concept can then start to move more towards TDD and testing first. Thanks for everyone's input.
FOLLOW UP
We recently started a new small project and a small portion of the team used TDD, the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks for everyone who offered advice!
If the team is floundering at implementing TDD, but they weren't creating any Unit Tests before...then start them off by creating Unit Tests after their code is written. Even Unit tests written after the code are better than no Unit Tests at all!
Once they're proficient at Unit Testing (and everything that comes with it), then you can work on getting them to create the tests first...and code second.
It is absolutely still worth writing the unit tests after the code is written. It's just that it's often harder, because the code wasn't designed to be testable and may be overcomplicated.
I think a good pragmatic way to bring a team into TDD is to provide the alternative method of "test-during development" in the transition period, or possibly in the long-term. They should be encouraged to TDD sections of code that seem natural to them. However, in sections of code that seem difficult to approach test-first or when using objects that are predetermined by a non-agile A&D process, developers can be given the option of writing a small section of the code, then writing tests to cover that code, and repeating this process. Writing unit tests for some code immediately after writing that code is better than not writing any unit tests at all.
It's in my humble opinion better to have 50% test coverage with "code first, test after" and a 100% completed library, than 100% test coverage and a 50% completed library with TDD. After a while, your fellow developers will hopefully find it entertaining and educational to write tests for all of the public code they write, so TDD will sneak its way into their development routine.
TDD is about design! So if you use it, you can be sure you'll have a testable design for your code, making it easier to write your tests. If you write tests after the code is written, they are still valuable, but IMHO you will waste time, since you will probably not have a testable design.
One suggestion I can give to you to try to convince your team to adopt TDD is using some of the techniques described in Fearless Change: Patterns for Introducing New Ideas, by Mary Lynn Manns and Linda Rising.
I just read this on a calendar: "Every rule, executed to its utmost, becomes ridiculous or even dangerous." So my suggestion is not to be religious about it. Every member of your team must find a balance between what they feel "right" when it comes to testing. This way, every member of your team will be most productive (instead of, say, thinking "why do I have to write this sti**** test??").
So some tests are better than none, tests written after the code are better than only a few tests, and testing before the code is better than testing after it. But each step has its own merits and you shouldn't frown upon even small steps.
If they're new to testing, then IMO start off testing code that's already been written and slowly graduate to writing tests first. As someone trying to learn TDD and new to unit testing, I've found it kind of hard to do a complete 180 and change my mindset to write tests before code, so the approach I'm taking is sort of a 50-50 mix: when I know exactly what the code will look like, I'll write the code and then write a test to verify it. For situations where I'm not entirely sure, I'll start with a test and work my way backwards.
Also remember that there is nothing wrong with writing tests to verify code, instead of writing code to satisfy tests. If your team doesn't want to go the TDD route then don't force it on them.
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
There is absolutely no doubt about the fact that there is value in unit tested code (regardless of when tests were written) and I include "the code is unit tested" in the "Definition of Done". People may use TDD or not, as long as they test.
Regarding version control, I like to use "development branches" with a unit tested policy (i.e. the code compiles and builds, all unit tests pass). When features are done, they are published from development branches to the trunk. In other words, the trunk branch is the "Done branch" (No junk on the trunk!) and has a shippable policy (can release at any time) which is more strict and includes more things than "unit tested".
This is something that your team will have to have its own successes with before they begin to believe in it. I'll rant about my nUnit epiphany for anyone who cares:
About 5 years ago I discovered nUnit when working on a project. We had almost completed V1.0 and I created a few tests just to try out this new tool. We had a lot of bugs (obviously!) because we were a new team, on a tight deadline, high expectations (sound familiar?) etc. Anyway we got 1.0 in and started on 1.1. We re-orged the team a little bit and I got 2 devs assigned to me. I did a 1-hour demo for them and told them that everything we wrote had to have a test case with it. We constantly ran "behind" the rest of the team during the 1.1 dev cycle because we were writing more code, the unit tests. We ended up working more but here's the payoff -- when we finally got into testing we had exactly 0 bugs in our code. We helped everyone else debug and repair their bugs. In the postmortem, when the bug counts showed up, it got EVERYONE's attention.
I'm not dumb enough to think you can test your way to success but I am a true believer when it comes to unit tests. The project adopted nUnit and it soon spread to the company for all .Net projects as a result of 1 success. Total time period for our V1.1 release was 9 dev weeks so it was definitely NOT an overnight success. But long term, it proved successful for our project and the company we built solutions for.
There is no doubt that testing (First, While or even After) will save your bacon, and improve your productivity and confidence. I recommend adopting it!
I was in a similar situation. Because I was a "noob" developer, I was often frustrated when working on team projects by the fact that a contribution had broken the build. I did not know if I was to blame or even, in some cases, who to blame. But I was more concerned that I was doing the same thing to my fellow developers. This realisation then motivated me to adopt some TDD strategies. Our team started having silly games and rules, like you cannot go home till all your tests pass, or if you submit something without a test, then you have to buy everyone "beer/lunch/etc", and it made TDD more fun.
One of the most useful aspects of unit testing is ensuring the continuing correctness of already working code. When you can refactor at will, let the IDE remind you of compile-time errors, and then click a button to let your tests spot potential runtime errors, even ones that crop up in previously trivial blocks of code, I think you will find your team starting to appreciate TDD. So starting with testing existing code is definitely useful.
Also, to be blunt, I have learned more about how to write testable code by trying to test written code than from starting with TDD. It can be too abstract at first if you are trying to think of contracts that will both accomplish the end goal and allow testing. But when you look at code and can say "This singleton here completely spoils dependency injection and makes testing this impossible," you start to develop an appreciation for what patterns make your testing life easier.
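A tiny before/after illustration of that singleton point (the Database and OrderService names are made up for this sketch):

```cpp
// Before: the dependency is reached through a global singleton, so every
// test of the service implicitly talks to the real database.
class Database {
public:
    static Database& instance() { static Database db; return db; }
    bool write(int /*orderId*/) { return true; }  // imagine real I/O here
};

class UntestableOrderService {
public:
    bool save(int orderId) { return Database::instance().write(orderId); }
};

// After: the dependency is injected through an interface, so a test can
// substitute an in-memory fake and never touch the real database.
class IDatabase {
public:
    virtual ~IDatabase() = default;
    virtual bool write(int orderId) = 0;
};

class OrderService {
public:
    explicit OrderService(IDatabase& db) : db_(db) {}
    bool save(int orderId) { return db_.write(orderId); }
private:
    IDatabase& db_;
};
```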
Well, if you do not write tests first it's not "test driven", it's just testing. Testing has benefits in itself, and if you already have a code base, adding tests for it is certainly useful even if it's not TDD but merely testing.
Writing tests first is about focusing on what the code should do before writing it. Yes you also get a test doing that and it's good, but some may argue it's not even the most important point.
What I would do is train the team on toy projects (see Coding Dojos and Katas) using TDD (if you can get experienced TDD programmers to participate in such a workshop, even better). When they see the benefits, they will use TDD for the real project. But meanwhile, do not force them; if they do not see the benefit, they won't do it right.
If you have design sessions before writing code or have to produce a design doc, then you could add Unit Tests as the tangible outcome of a session.
This could then serve as a specification of how your code should work. Encourage pairing on the design session, to get people talking about how something should work and what it should do in given scenarios. Work out the edge cases, with explicit test cases for them, so everyone knows what the code is going to do when given a null argument, for example.
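For example, the null-argument case could leave the design session as an explicit test; the formatName function and its throwing behavior are invented here:

```cpp
#include <gtest/gtest.h>
#include <stdexcept>
#include "customer_formatter.h"  // hypothetical module agreed on in the session

// Decision from the design session (invented): a null customer pointer is a
// programming error and must throw, not silently return an empty string.
TEST(CustomerFormatterEdgeCases, NullCustomerThrows) {
    EXPECT_THROW(formatName(nullptr), std::invalid_argument);
}
```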
As an aside, BDD may also be of interest.
You may find some traction by showing an example or two where TDD results in less code being written - because you only write the code required to make the test pass, the temptation to gold-plate or build things you aren't gonna need (YAGNI) is easier to resist. Code you don't write doesn't need to be maintained, refactored, etc., so it's a "real savings" that can help sell the concept of TDD.
If you can clearly demonstrate the value in terms of time, cost, code and bugs saved, you may find it's an easier sell.
Starting to build JUnit test classes is the way to begin; for existing code it's the only way to start. In my experience it is very useful to create test classes for existing code. If management thinks this would cost too much time, you can propose to only write test classes when the corresponding class is found to contain a bug, or is in need of cleanup.
For the maintenance process the approach to get the team over the line would be to write JUnit tests to reproduce bugs before you fix them, i.e.
bug is reported
create a JUnit test class if needed
add a test that reproduces the bug
fix your code
run the test to show that the bug no longer occurs
You can explain that documenting bugs in this way will prevent those bugs from creeping back in later. That is a benefit the team can experience immediately, as the sketch below shows.
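The answer talks about JUnit, but the pattern is language-neutral; here is the same workflow sketched in C++ with Google Test, around an invented bug report:

```cpp
#include <gtest/gtest.h>
#include "order_totals.h"  // hypothetical module containing the bug

// Bug #4711 (invented): the total is wrong when the cart holds exactly one
// item. Written before the fix, this test fails and reproduces the bug;
// after the fix it passes and stays in the suite forever, so the bug
// cannot creep back in unnoticed.
TEST(Bug4711Regression, SingleItemCartTotalIsTheItemPrice) {
    EXPECT_DOUBLE_EQ(cartTotal({9.99}), 9.99);
}
```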
I have done this in many organizations and I have found the single best way to get TDD started and followed is to set up pair programming. If you have someone else you can count on that knows TDD then the two of you can split up and pair with other developers to actually do some paired programming using TDD. If not I would train someone who will help you to do this before presenting it to the rest of the team.
One of the major hurdles with unit testing, and especially TDD, is that developers don't know how to do it, so they cannot see how it can be worth their time. Also, when you first start out, it is much slower and doesn't seem to provide benefits; it only really pays off once you are good at it. By setting up paired programming sessions you can get developers to learn it quickly and get good at it sooner. Additionally, they will be able to see immediate benefits from it as you work through it together.
This approach has worked many times for me in the past.
One powerful way to discover the benefits of TDD is to do a significant rewrite of some existing functionality, perhaps for performance reasons. By creating a suite of tests that do a good job covering all the functionality of the existing code, this then gives you the confidence to refactor to your heart's content with full confidence that your changes are safe.
Note that in this case I'm talking about testing the design or contract - unit tests that test implementation details will not be suitable here. But then again, TDD can't test implementation details by definition, as the tests are supposed to be written before the implementation.
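The distinction might look like this in practice (a hypothetical cache class about to be rewritten for performance):

```cpp
#include <gtest/gtest.h>
#include "cache.h"  // hypothetical class being rewritten

// Contract test: survives the rewrite, because it only states what the
// cache promises its callers, not how it is implemented.
TEST(CacheContract, StoredValueCanBeRetrieved) {
    Cache c;
    c.put("key", 42);
    EXPECT_EQ(c.get("key"), 42);
}

// An implementation-detail test, by contrast, would assert something like
// "the cache uses four hash buckets internally" and break the moment the
// internals change - exactly what you don't want when guarding a rewrite.
```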
TDD is a tool that developers can use to produce better code. I happen to feel that the exercise of writing testable code is at least as valuable as the tests themselves. Isolating the IUT (Implementation Under Test) for testing purposes has the side effect of decoupling your code.
TDD isn't for everyone, and there's no magic that will get a team to choose to do it. The risk is that unit test writers that don't know what's worth testing will write a lot of low value tests, which will be cannon fodder for the TDD skeptics in your organization.
I usually make automated Acceptance Tests non-negotiable, but allow developers to adopt TDD as it suits them. I have my experienced TDDers train/mentor the rest and "prove" the usefulness by example over a period of many months.
This is as much a social/cultural change as it is a technical one.

How can I check that I didn't break anything when refactoring?

I'm about to embark on a bout of refactoring of some functions in my code. I have a nice amount of unit tests that will ensure I didn't break anything, but I'm not sure about the coverage they give me. Are there any tools that can analyze the code and see that the functionality remains the same?
I plan to refactor some rather isolated code, so I don't need to check the entire program, just the areas that I'm working on.
For context, the code I'm working on is in C/C++, and I work in Linux with GCC and VIM.
gcov will give you coverage information for your unit tests.
It's difficult to answer your question accurately without knowing more about the refactorings you plan to perform.
One piece of advice is to proceed in small iterations instead of refactoring lots and lots of parts of your code base and then realizing that everything breaks.
Reference: The GNU Coverage Tool - A Brief Tutorial
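In outline, the usual gcov workflow looks something like this; the file and function names below are placeholders, not from the asker's project:

```cpp
// math_utils.cpp -- placeholder name for one of the files being refactored.
int clamp(int v, int lo, int hi) {
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

// Typical workflow (commands shown as comments):
//
//   g++ --coverage -O0 math_utils.cpp test_math_utils.cpp -o tests
//   ./tests                # running the unit tests writes .gcda data files
//   gcov math_utils.cpp    # produces the annotated math_utils.cpp.gcov
//
// The .gcov output marks each line with its execution count; a branch your
// tests never reach (say, the 'v > hi' case) shows up flagged as unexecuted,
// telling you where your safety net has holes before you start refactoring.
```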
There's no "easy" way to ensure that functionality hasn't changed. You have to have complete unit tests that cover all possibilities. It's impossible to test absolutely everything, but you can make sure that your most important use cases have thorough tests.
You can also use a coverage tool to ensure you have good test coverage:
http://covtool.sourceforge.net/
If you have unit tests but you're not happy that they cover the areas you're going to be refactoring, you can find out using code coverage analysis tools. If you find gaps, you can create and double-check tests to fill the gaps, then proceed with your refactoring fairly happy that your (updated) unit tests cover the ground thoroughly -- which is good for the project in the long term as well.
The trick is to use unit tests. Basically, the compiler checks the syntactic correctness of your creation, while your unit tests verify it from a functional standpoint. By having a large set of good unit tests you can feel safe when refactoring (especially when working in a multi-developer project).
I'm not sure about your specific platform of choice, but have you looked at code coverage tools, such as Bullseye? It won't provide you an analysis of the overall functionality (and whether it deviates), but it will help you ensure that your tests adequately exercise your target libraries. It's a commercial application, but I know there are similar OSS tools for other languages, and there might be free licenses if you need one.

TDD: test first anyway?

Are you doing test-first anyway? Or in some cases do you do some coding and then write your tests to make sure the code works? As for me, I prefer to create a class first. Sure, during class creation I think about its interface and how to test the class. But I don't write testing code first. Do you write it first? Do you think you should always write test code first?
I'm not a purist in this matter (TDD involves more than just writing the tests first; it's also about initially writing very minimal, "hard-coded" tests and refactoring them a lot -- see The Book by The Master himself).
I tend to test-first when I'm doing incremental development to add a feature to an existing module, and I insist on test-first when the incremental development I'm doing is to fix a bug (in the latter case I absolutely want a unit-test AND an integration-test that both reproduce the bug, before I fix the code that caused the bug).
I tend to be laxer when I'm doing "greenfield" development, especially if that's of an exploratory, "let's see what we can do here that's useful", nature -- which does happen, e.g. in data mining and the like -- you have a somewhat vague idea that there might be a useful signal buried in the data, some hypothesis about its possible nature and smart ways to [maybe] extract it -- the tests won't help until the exploration has progressed quite a bit.
And, once I start feeling happy with what I've got, and thus start writing tests, I don't necessarily have to redo the "exploratory" code from scratch (as I keep it clean and usable as I go, not too hard to do especially in Python, but also in R and other flexible languages).
Test-driven development, by definition, is writing your tests first. If you create your class first, the subsequent tests you write can be called Unit Tests, but it is not TDD.
There are many who say that writing your tests first improves code quality. I am inclined to agree, provided there is some effort put into the software design beyond just writing tests and making them pass.
If you are refactoring an existing legacy system, it is a good idea to wrap the functionality of that system in a suite of tests prior to refactoring. That way, you know if your code changes break something.
In my humble opinion, Test Driven Development is something more than writing unit tests first. Test first TDD is more about putting yourself in the mindset of thinking about what exactly it is you are trying to achieve with the code you are about to write.
What should your acceptance criteria be?
When will your code be 'done' and how is done defined?
Writing unit tests before code is only one of the ways to formalize such acceptance criteria. The biggest benefit in my view of Test first TDD is that you give a good hard think and document (in unit tests, on paper, on the white board) what the acceptance criteria are for the feature/story you are implementing. Having such documentation also helps define the scope of done.
So, whether you code a function first and then write the unit test for it or you decide to first write the test and then code the function is of secondary nature. Once you've thought out and documented your acceptance criteria, your benefit will be that your target is clearer and you're more likely to focus on fulfilling the acceptance criteria minimizing any 'noise' (e.g. feature x would be nice to add, am I 'done' yet).
Of course this does not mean that we go ahead and write code leaving it untested and attempt to retrofit unit tests when we've already coded 4 classes for example! I just think that there is no need to be 100% dogmatic in writing unit tests before actual code as the benefit in TDD lies elsewhere, but in any case our unit tests should keep up with our growing code base.
Let's be clear here: TDD was designed for production code. If you want to follow the rules of TDD for production code, then the first thing you write is your first test.
I write my first test (before writing a line of production code), which may want to instantiate a new class and running it produces some sort of error, which I fix by writing the smallest amount of production code to get the test running.
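That very first cycle can be as small as this invented Account example (the "production" class is shown above the test only so the file compiles):

```cpp
#include <gtest/gtest.h>

// Step 2 -- the smallest amount of production code that gets the test
// passing. In real life this is written AFTER the test below.
class Account {
public:
    int balance() const { return 0; }
};

// Step 1 -- the first test, written before any production code existed.
// Initially it didn't even compile, because there was no Account class;
// that compiler error was the first "failing test".
TEST(AccountTest, NewAccountHasZeroBalance) {
    Account a;
    EXPECT_EQ(a.balance(), 0);
}
```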
If you want to do something else, then you can make up your own rules: You can write whatever code and/or tests you like to explore any designs or patterns or anything.
However, if you then want to write production code based on what you have learned from your experiments, you may well be better placed to write your first test - though technically you should really throw your experiments away first.

Starting UnitTesting on a LARGE project

Can anyone recommend some best practices for how to tackle the problem of starting to unit test a large existing code base?
The problems that I'm currently facing include:
HUGE code base
ZERO existing UnitTests
High coupling between classes
Complex OM (not much I can do here - it's a complex business domain)
Lack of experience in writing UnitTests/TDD
Database dependency
External sources dependencies (Web services, WCF services, NetBIOS, etc)
Obviously, I understand that I should start with refactoring the code, to make it less coupled, and more testable. However, doing such refactoring is risky without UnitTests (Chicken and Egg, anyone?).
On a side note, would you recommend starting the refactoring and writing test on the Domain classes, or on tier classes (logging, utilities, etc)?
First, I second the "Working Effectively with Legacy Code" recommendation Jeffrey Frederick gave.
As you stated in the question, you cannot change your code because you currently have no unit tests available to detect regressions, and you cannot add unit tests because your codebase is not unit-testable. In this situation, you can create characterization tests : end-to-end automatic tests that would help you detecting changes in the external behavior of your software. Once those are in place, you can start to slowly modify the code.
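A characterization test asserts whatever the system does today, typically end to end. A sketch with invented names (generateReport is a hypothetical entry point; readFile would be a small test helper, not shown):

```cpp
#include <gtest/gtest.h>
#include <string>
#include "report_generator.h"  // hypothetical entry point into the legacy system

// Characterization ("golden master") test: the expected output file was
// captured by running the CURRENT system, not written from a spec. If the
// external behavior changes during restructuring, this test fails.
TEST(ReportCharacterization, KnownInputProducesGoldenOutput) {
    std::string actual = generateReport("fixtures/sample_input.txt");
    EXPECT_EQ(actual, readFile("fixtures/expected_output.golden"));
}
```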
However, putting your HUGE code base under test is an enormous effort and highly risky, and you'll probably burn out the team doing it, as they will have to put in a lot of effort for little reward in terms of test coverage. Putting the entire code base under test is an endless effort.
Instead, develop new capabilities outside the code base, so that it won't hinder you. Once the new code is fully tested, integrate it into the code base.
Also try to create unit tests every single time you fix a problem in the code base. It will be hard the first few times, but it gets easier once you have a unit-testing environment ready to go.
Lack of experience in writing UnitTests/TDD
I think this is the most significant.
Standard advice is to start writing unit tests for all your new code first so that you learn how to unit test before you attempt to unit test your existing code. I think this is good advice in theory but hard to follow in practice, since most new work is modifications of existing systems.
I think you'd be well served by having access to a "player coach", someone who could work on the project with your team and teach the skills in the process of applying them.
And I think I'm legally obligated to tell you to get Michael Feathers' Working Effectively with Legacy Code.
There is good advice in the book Manage It! by Johanna Rothman.
I can also recommend the following:
Write unit tests for newly created source code fragments
Create unit tests for bugs
Create unit tests for the riskiest parts of the application
Create unit tests for the most valuable parts of the application
If the coupling is too high, create tests that are closer to module or integration tests, but keep them automated.
A single unit test will not help, but you have to start somewhere. After a while there will be a handful of unit tests on the riskiest parts of the application, and they will help prevent bugs in those areas. When you get to this point, most of the developers will realize how useful unit tests are.
You won't get this thing right on your own.
A lot of persuasion is needed, so I want to point you to this thread, which contains lots of information and hints for good argumentation: How do you persuade others to write unit-tests?
Only the whole team together can create such a large body of unit tests.
Gross. I've had to deal with this too. The best thing to do, I think, is to lean heavily on yucky tools that you probably don't want to use (like NUnitAsp) at first, to get the existing code under test. Then you can start refactoring, while those tools keep your code base from falling apart from under you.
Then, as you refactor, write more logical unit tests on the new, testable pieces that you're creating and gradually retire the original tests.
Good luck with this...
Since you have little experience with writing unit tests, I recommend that you first try to gain at least some. Otherwise, it's likely that you'll abandon TDD/unit testing and maybe introduce new bugs into the system you're trying to test.
The best way to gain experience is to find someone experienced who can assist you.