Unit Test, Test Driven Development [closed] - unit-testing

I had a debate with a colleague about unit testing and test-driven development. The points in question are below:
1) Writing a unit test before you write functional code does not constitute a Test Driven Development approach.
I think writing the unit test first does constitute test-driven development; it is part of TDD.
2) A suite of unit tests is simply a by-product of TDD.
I think a suite of unit tests is NOT merely a by-product of TDD.
What do you say?

1) Writing tests before you write functional code is necessary to do TDD, but does not by itself constitute TDD, at least according to the classic definition, where the important point is that making the tests pass is what drives the design (rather than some formal design document).
2) Again, the classic view says that the important point is that the design evolves from the tests, forcing it to be very modular. It is (or was) a novel concept that tests could (and should) influence the design, and it was rejected or overlooked often enough that TDD proponents felt it needed to be stressed. But saying that the tests themselves are "just a byproduct" is, IMO, a counterproductive exaggeration.

Writing unit tests prior to writing functional code is the whole point of TDD. So the suite of unit tests is not a by-product; it is the central tenet.
Wikipedia's article
Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test and finally refactors the new code to acceptable standards.

TDD is all about Red - Green - Refactor.
When you have the code before the test there is no Red, and that is bad: you may have errors in your tests, or be testing something other than what you think you are testing, and get Green from the start. You should start at Red and move to Green only after you add the code that the test exercises.
Red-Green-Refactor diagram: http://reddevnews.com/~/media/ECG/visualstudiomagazine/Images/2007/11/listingsID_148_0711_rdn_tb%20gif.ashx
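A minimal sketch of the cycle in JUnit (class and method names are hypothetical): the test comes first and fails (Red), then just enough code is written to pass it (Green), and only then is the code cleaned up (Refactor).

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Red: written before PriceCalculator exists, so at first it does not even compile.
    @Test
    public void appliesTenPercentDiscount() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discounted(100.0), 0.001);
    }
}

// Green: the simplest implementation that makes the test pass.
class PriceCalculator {
    double discounted(double price) {
        return price * 0.9;
    }
}
// Refactor: with the test staying green, the code can now be reshaped safely.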

It really depends on what your tests do.
As far as I'm concerned TDD means that code classes, properties and methods are created due to the test being written for them first. In fact some development tools allow you to create code stubs directly from the test screens.
Writing a unit test for a method in a class that you've already created isn't TDD. Writing code so that your test cases pass is.
TDD will give you far greater test coverage than standard unit testing. It will also focus your thoughts on what is wanted of the code, and although it may appear to take longer to produce a product, the result is more or less fully tested when it is built.
You will always end up with a suite of unit tests at the end.
However, a pragmatic approach must be taken as to how far you go with this, as some areas are notoriously difficult to produce in a TDD environment, e.g. WPF MVVM-style views or web pages with JavaScript.

Related

How we have unit testing philosophy? [closed]

Hi Stack Overflow family.
It is beyond doubt that unit testing is of great importance in software development. But I think it is a practice and a philosophy of testing first. The majority of developers want to adopt this philosophy, but they can't apply it on their projects because they aren't used to Test Driven Development. My question is for those who follow this philosophy: what are the properties of a good test, in your experience? And how did you make it part of your daily work?
Good days.
The Way of Testivus brings enlightenment on unit testing.
If you write code, write tests.
Don’t get stuck on unit testing dogma.
Embrace unit testing karma.
Think of code and test as one.
The test is more important than the unit.
The best time to test is when the code is fresh.
Tests not run waste away.
An imperfect test today is better than a perfect test someday.
An ugly test is better than no test.
Sometimes, the test justifies the means.
Only fools use no tools.
Good tests fail.
Some characteristics of a good test:
its execution doesn't depend on context (or state) - i.e. whether it's run in isolation or together with other tests;
it tests exactly one functional unit;
it covers all possible scenarios of the tested functional unit.
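A small sketch of those properties (JUnit; the Counter class is made up for illustration): each test builds its own fixture, so it behaves the same alone or in a suite, and each test checks exactly one behaviour.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CounterTest {

    // No shared state: each test creates its own Counter, so ordering never matters.
    @Test
    public void startsAtZero() {
        Counter counter = new Counter();
        assertEquals(0, counter.value());
    }

    // Exactly one behaviour per test: this one only checks increment.
    @Test
    public void incrementAddsOne() {
        Counter counter = new Counter();
        counter.increment();
        assertEquals(1, counter.value());
    }
}

// Minimal class under test so the sketch stands alone.
class Counter {
    private int count;

    void increment() {
        count++;
    }

    int value() {
        return count;
    }
}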
The discussion cannot be better phrased.
http://discuss.joelonsoftware.com/default.asp?joel.3.732806.3
http://discuss.joelonsoftware.com/default.asp?joel.3.39296.27
As for the idea of a good test, it is one which catches a defect :). But TDD is more than defect catching; it is more about development and continuity.
I always think the rules and philosophy of TDD are best summed up in this article by Robert C. Martin:
The Three Rules of TDD
In it, he summarises TDD with the following three rules:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
There is an implied fourth rule:
You should refactor your code while the tests are passing.
While there are many more detailed examples, articles and books, I think these rules sum up TDD nicely.
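A sketch of how the rules play out on a tiny example (Java; the names are hypothetical, and this is the classic stack kata rather than anyone's production code):

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class StackTest {

    // Rule 2: write only enough of a test to fail. At first this does not even
    // compile, because Stack does not exist -- and compilation failures count as failures.
    @Test
    public void newStackIsEmpty() {
        Stack stack = new Stack();
        assertTrue(stack.isEmpty());
    }
}

// Rules 1 and 3: write only enough production code to make that one test pass.
class Stack {
    boolean isEmpty() {
        return true; // deliberately naive; the next failing test forces a real implementation
    }
}
// Implied rule 4: refactor while the tests stay green.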

least 'worth it' unit test you've ever written? [closed]

On the SO blog and podcast Joel and Jeff have been discussing the, often ignored, times when unit testing a particular feature simply isn't worth the effort. Those times when unit testing a simple feature is so complicated, unpredictable, or impractical that the cost of the test doesn't reflect the value of the feature. In Joel's case, the example was checking compression quality, which would have called for a complicated image comparison had they decided to write the test.
What are some cases you've run into where this was the case? Common areas I can think of are GUIs, page layout, audio testing (testing to make sure an audible warning sounded, for example), etc.
I'm looking for horror stories and actual real-world examples, not guesses (like I just did). Bonus points if you, or whoever had to write said 'impossible' test, went ahead and wrote it anyway.
@Test
public void testSetName() {
    UnderTest u = new UnderTest();
    u.setName("Hans");
    assertEquals("Hans", u.getName());
}
Testing set/get methods is just stupid; you don't need that. If you're forced to do this, your architecture has some serious flaws.
Foo foo = new Foo();
Assert.IsNotNull(foo);
My company writes unit tests and integration tests separately. If we write an Integration test for, say, a Data Access class, it gets fully tested.
They see Unit Tests as the same thing as an Integration test, except that they can't go off-box (i.e. make calls to databases or web services). Yet we have Unit Tests as well as Integration Tests for the Data Access classes.
What good is a test against a data access class that can't connect to the data?
It sounds to me like the writing of a useless unit test is not the fault of unit tests, but of the programmer who decided to write the test.
As mentioned in the podcast (or somewhere else, I believe), if a unit test is obscenely hard to create then it's possible that the code could stand to be refactored, even if it currently "works".
Even the "stupid" unit tests are necessary sometimes, even in the case of "get/set Name". When dealing with clients with complicated business rules, some of the most straightforward properties can have ridiculous caveats attached, and you may find that some incredibly basic functions break.
Taking the time to write a complicated unit test means that you've taken the time to fine-tune your understanding of the code, and you might fix bugs in doing so, even if you never complete the unit test itself.
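For example (a hypothetical business rule, not from the original post): a property that looks like a plain setter but is required to normalise its input. The "stupid" get/set test is exactly what catches the regression when someone later simplifies the setter.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CustomerTest {

    // Looks like a trivial get/set test, but the rule that names are stored
    // trimmed and never null is the kind of caveat that breaks quietly.
    @Test
    public void setNameTrimsWhitespace() {
        Customer customer = new Customer();
        customer.setName("  Hans  ");
        assertEquals("Hans", customer.getName());
    }
}

class Customer {
    private String name = "";

    public void setName(String name) {
        this.name = name == null ? "" : name.trim();
    }

    public String getName() {
        return name;
    }
}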
Once I wrote a unit test to expose a concurrency bug, in response to a challenge on C2 Wiki.
It turned out to be unreasonably hard, and hinted that guaranteeing correctness of concurrent code is better handled at a more fundamental level.

Do you write your unit tests before or after coding a piece of functionality? [closed]

I was wondering when most people write their unit tests, if at all. I usually write tests after writing my initial code to make sure it works like it's supposed to. I then fix what is broken.
I have been pretty successful with this method but have been wondering if maybe switching to writing the test first would be advantageous?
Whenever possible I try to follow a pure TDD approach:
write the unit tests for the feature being developed; this forces me to decide on the public interface(s)
code the feature ASAP (as simple as possible, but not simpler)
correct/refactor/retest
add additional tests if required for better coverage, exceptional paths, etc. [rare but worth consideration]
repeat with the next feature
It is easy to get excited and start coding the feature first, but this often means that you will not think through all of the public interfaces in advance.
EDIT: note that if you write the code first, it is easy to unintentionally write the test to conform to the code, instead of the other way 'round!
I really want to write the code first, and often do. But the more I do real TDD, where I refuse to write any code without a test, the more I find I write more testable code and better code.
So, yes, write the test first. It takes willpower and determination, but it really produces better results.
As an added bonus, TDD has really helped me keep focused in an environment with distractions.
I follow a TDD approach, but I'm not as much of a purist as some. Typically, I will rough in the class/method with a stub that simply throws a NotImplementedException. Then I will start writing the test(s). One feature at a time, one test at a time. Frequently, I'll find that I've missed a test -- perhaps when writing other tests or when I find a bug -- then I'll go back and write a test (and the bug fix, if necessary).
Using TDD helps keep your code in line with YAGNI (you aren't gonna need it) as long as you only write tests for features that you need to develop AND only write the simplest code that will satisfy your tests.
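A rough Java sketch of that rhythm (the original mentions .NET's NotImplementedException; UnsupportedOperationException is the usual Java stand-in, and all the names here are invented):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1: rough in the class with a stub that simply throws.
class TaxCalculator {
    double taxFor(double amount) {
        throw new UnsupportedOperationException("not implemented yet");
    }
}

// Step 2: write the tests one feature at a time; each stays red until the
// stub above is replaced with just enough real code to make it pass.
public class TaxCalculatorTest {

    @Test
    public void zeroAmountOwesNoTax() {
        assertEquals(0.0, new TaxCalculator().taxFor(0.0), 0.001);
    }
}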
We try and write them beforehand, but I will fully admit that our environment is chaotic at times, and sometimes this step is initially passed over and the tests are written later.
I usually write a list of the unit tests I will be writing before I write the code, and actually code the unit tests after, but that's because I use a program to generate the unit test stubs and then modify them to test the appropriate cases.
I tend to write unit tests as I write the code, so I'm exercising what I'm writing as I go, much like I would with a throwaway console app.
Sometimes I've even written tests without asserts, just debug outputs, just to test that how I use something is working without exceptions.
So my answer is, after a little bit of coding.
I use Test-Driven Development (TDD) to produce most of my production code, so I write my unit tests first unless I'm working on untestable legacy code (when the bar to write the tests is too high).
Otherwise, I write functional-level acceptance tests that the code shall satisfy.
Writing the unit tests first allows me to know exactly "where I am": I know that what has been coded so far is operational and can be integrated or sent to the test team; it's not bug free, but it works.
I'll offer that I'm fairly new to writing unit tests, and that I wish I had written them before I wrote my code. Or, at least, that I'd had a better understanding of how to write more testable code, as well as of concepts like dependency injection, which seems to be critical to writing testable code.
Normally I write unit tests before the code. They are very beneficial, and writing them before the code just makes sense. Dependency injection lends itself well to unit testing: it allows parts of your code to be mocked out so you only test a particular unit, i.e. your code is loosely coupled and it's therefore easy to swap in a different (in this case mocked) implementation.
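A hedged sketch of that point (Mockito assumed; all names are hypothetical): because the collaborator is injected through the constructor, the test can hand in a mock and exercise only the one unit.

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

interface RateProvider {
    double rateFor(String currency);
}

// The dependency is injected, so this class never reaches out to a real service.
class Converter {
    private final RateProvider rates;

    Converter(RateProvider rates) {
        this.rates = rates;
    }

    double convert(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

public class ConverterTest {

    @Test
    public void convertsUsingInjectedRate() {
        RateProvider rates = mock(RateProvider.class);
        when(rates.rateFor("EUR")).thenReturn(2.0);

        assertEquals(20.0, new Converter(rates).convert(10.0, "EUR"), 0.001);
    }
}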

Do you separate your unit tests from your integration tests? [closed]

I was just wondering if anyone else sees integration tests as simply a special kind of unit test. I have, however, heard from other programmers that it is a good idea to separate your unit tests and integration tests. I was wondering if someone could explain why this is a good idea. What sorts of advantages would there be to treating integration and unit tests as completely different? For example, I have seen separate folders and packages for integration tests and unit tests. I would be of the opinion that a single test package containing both unit tests and integration tests would be sufficient, as they are basically the same concept.
I see them as different for the following reason.
Unit tests can be performed on a single class/module in the developer's environment.
Integration tests should be performed in an environment that resembles the actual production setup.
Unit tests are kept deliberately "lightweight" so that the developer can run them as often as needed with minimal cost.
Speed is the primary reason. You want your unit tests to be as fast as possible so that you can run them as often as possible. You should still run your integration tests, but running them once before a check-in should be enough IMO. The unit test suite should be run much more often - ideally with every refactoring.
I work in an environment where we have about 15k JUnit tests with unit and integration tests completely mixed. The full suite takes about half an hour to run. Developers avoid running it and find mistakes later than they should. Sometimes they check in after running only a subset of the tests and include a bug which breaks the continuous build.
Start separating your tests early. It's very hard to do once you have a large suite.
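One common way to enforce the split in a Java/JUnit 4 code base (a sketch; the category name and test class are made up) is to tag the slow tests with a category so the build can run the fast suite on its own:

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface used only for grouping tests.
interface IntegrationTests {
}

@Category(IntegrationTests.class)
public class OrderRepositoryIT {

    @Test
    public void savesAndReloadsAnOrder() {
        // talks to a real database, so it is excluded from the fast, every-refactoring run
    }
}

The build can then include or exclude that category (or simply keep integration tests in a separate source folder or module), so the quick suite runs constantly and the slow one runs before check-in.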
Yep. Typically unit tests are scoped at the class level, so they exist in an environment together with mock objects. Integration tests, on the other hand, work against your real assembly types rather than mocks.
I just don't see how one would organize both unit and integration into a single project.
if you limit the notion of 'unit test' to scope at the class level, then yes, keep them separate
however, if you define the smallest relevant testable unit as a feature then some of your 'unit' tests will technically be 'integration' tests
the rehashing of the various definitions/interpretations of the terms is largely irrelevant though; partitioning of your test suites should be a function of the scope of the components being tested and the time required to perform the tests.
For example, if all of your tests (unit, integration, regression, or whatever) apply against a single assembly and run in a few seconds, then keep them all together. But if some of your tests require six clean installation machines on a subnet while others do not, it makes sense to separate the first set of tests from the latter.
summary: the distinction of 'unit' and 'integration' tests is irrelevant; package test suites based on operational scope

Tricks for writing better unit tests [closed]

What are some of the tricks or tools or policies (besides having a unit testing standard) that you guys are using to write better unit tests? By better I mean 'covering as much of your code with as few tests as possible'. I'm talking about stuff that you have used and saw your unit tests improve by leaps and bounds.
As an example I was trying out Pex the other day and I thought it was really really good. There were tests I was missing out and Pex easily showed me where. Unfortunately it has a rather restrictive license.
So what are some of the other great stuff you guys are using/doing?
EDIT: Lots of good answers. I'll be marking as correct the answer that I'm currently not practicing but will definitely try and that hopefully gives the best gains. Thanks to all.
1. Write many tests per method.
2. Test the smallest thing possible. Then test the next smallest thing.
3. Test all reasonable input and output ranges. IOW: if your method returns boolean, make sure to test both the false and the true returns. For int? -1, 0, 1, n, n+1 (proof by mathematical induction). Don't forget to check for all Exceptions (assuming Java). (See the boundary-test sketch after this list.)
4a. Write an abstract interface first.
4b. Write your tests second.
4c. Write your implementation last.
5. Use Dependency Injection. (For Java: Guice - supposedly better, Spring - probably good enough.)
6. Mock your "Unit's" collaborators with a good toolkit like Mockito (assuming Java, again).
7. Google much.
8. Keep banging away at it. (It took me 2 years - without much help but for Google - to start "getting it".)
9. Read a good book about the topic.
10. Rinse, repeat...
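For point 3, a small sketch of boundary testing (JUnit; AgeValidator is a made-up example): exercise both sides of every boundary the method cares about, and both possible boolean returns.

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class AgeValidatorTest {

    // -1, 0, 1: the low end of the input range.
    @Test
    public void rejectsNegativeZeroAndSmallValues() {
        assertFalse(AgeValidator.isAdult(-1));
        assertFalse(AgeValidator.isAdult(0));
        assertFalse(AgeValidator.isAdult(1));
    }

    // n, n+1: both sides of the business boundary, so both boolean returns are covered.
    @Test
    public void boundaryAroundEighteen() {
        assertFalse(AgeValidator.isAdult(17));
        assertTrue(AgeValidator.isAdult(18));
        assertTrue(AgeValidator.isAdult(19));
    }
}

// Minimal class under test so the sketch stands alone.
class AgeValidator {
    static boolean isAdult(int age) {
        return age >= 18;
    }
}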
Write tests before you write the code (i.e. Test Driven Development). If for some reason you are unable to write tests before, write them as you write the code. Make sure that all the tests fail initially. Then, go down the list and fix each broken one in sequence. This approach will lead to better code and better tests.
If you have time on your side, then you may even consider writing the tests, forgetting about them for a week, and then writing the actual code. This way you have taken a step away from the problem and can see it more clearly. Our brains process tasks differently depending on whether they come from external or internal sources, and this break makes it an external source.
And after that, don't worry about it too much. Unit tests offer you a sanity check and stable ground to stand on -- that's all.
On my current project we use a little generation tool to produce skeleton unit tests for the various entities and accessors. It provides a fairly consistent approach for each modular unit of work which needs to be tested, and creates a great place for developers to test their implementations from (i.e. the unit test class is added by default when the rest of the entities and other dependencies are added).
The structure of the (templated) tests follows a fairly predictable syntax, and the template allows for the implementation of module/object-specific build-up/tear-down (we also use a base class for all the tests to encapsulate some logic).
We also create instances of entities (and assign test data values) in static functions so that objects can be created programmatically and used within different test scenarios and across test classes, which is proving to be very helpful.
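A hedged sketch of that last idea (all names invented): a static factory, sometimes called an object mother, that builds one known-good entity so different test classes can share it and tweak only the field a scenario cares about.

// A tiny "object mother": one place that knows how to build a valid test entity.
public final class TestOrders {

    private TestOrders() {
    }

    public static Order validOrder() {
        Order order = new Order();
        order.setCustomerName("Hans Test");
        order.setQuantity(1);
        return order;
    }
}

// Minimal entity so the sketch stands alone.
class Order {
    private String customerName;
    private int quantity;

    public void setCustomerName(String customerName) {
        this.customerName = customerName;
    }

    public void setQuantity(int quantity) {
        this.quantity = quantity;
    }
}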
Reading a book like The Art of Unit Testing will definitely help.
As far as policy goes read Kent Beck's answer on SO, particularly:
to test as little as possible to reach a given level of confidence
Write pragmatic unit tests for the tricky parts of your code, and don't lose sight of the fact that it's the program you are testing that's important, not the unit tests.
I have a Ruby script that generates test stubs for "brown" code that wasn't built with TDD. It writes my build script, sets up includes/usings and writes a setup/teardown to instantiate the test class in the stub. It helps me start from a consistent starting point without all the typing tedium when I hack at code written in the Dark Times.
One practice I've found very helpful is the idea of making your test suite isomorphic to the code being tested. That means that the tests are arranged in the same order as the lines of code they are testing. This makes it very easy to take a piece of code and the test suite for that code, look at them side-by-side and step through each line of code to verify there is an appropriate test. I have also found that the mere act of enforcing isomorphism like this forces me to think carefully about the code being tested, such as ensuring that all the possible branches in the code are exercised by tests, or that all the loop conditions are tested.
For example, given code like this:
void MyClass::UpdateCacheInfo(CacheInfo *info)
{
    if (mCacheInfo == info) {
        return;
    }

    info->incrRefCount();
    mCacheInfo->decrRefCount();
    mCacheInfo = info;
}
The test suite for this function would have the following tests, in order:
test UpdateCacheInfo_identical_info
test UpdateCacheInfo_increment_new_info_ref_count
test UpdateCacheInfo_decrement_old_info_ref_count
test UpdateCacheInfo_update_mCacheInfo