New to unit testing, how to write great tests? [closed]

I'm fairly new to the unit testing world, and I just decided to add test coverage for my existing app this week.
This is a huge task, mostly because of the number of classes to test but also because writing tests is all new to me.
I've already written tests for a bunch of classes, but now I'm wondering if I'm doing it right.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests seem so tightly bound to the method (testing every code path, expecting certain inner methods to be called a certain number of times with certain arguments) that if I ever refactor the method, the tests will fail even though the final behavior of the method did not change.
This is just a feeling and, as I said earlier, I have no experience with testing. If some more experienced testers out there could give me advice on how to write great tests for an existing app, that would be greatly appreciated.
Edit: I would like to thank Stack Overflow; I got great input in less than 15 minutes, which answered more than the hours of online reading I just did.

My tests seem so tightly bound to the method (testing every code path, expecting certain inner methods to be called a certain number of times with certain arguments) that if I ever refactor the method, the tests will fail even though the final behavior of the method did not change.
I think you are doing it wrong.
A unit test should:
test one method
provide some specific arguments to that method
test that the result is as expected
It should not look inside the method to see what it is doing, so changing the internals should not cause the test to fail. You should not directly test that private methods are being called. If you are interested in finding out whether your private code is being tested then use a code coverage tool. But don't get obsessed by this: 100% coverage is not a requirement.
If your method calls public methods in other classes, and these calls are guaranteed by your interface, then you can test that these calls are being made by using a mocking framework.
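For example, if your class is given an IEmailSender and its contract promises exactly one confirmation e-mail per order, that call is part of the public behavior and can be verified with a mock. This is only a rough sketch assuming the Moq library; IEmailSender, OrderService and PlaceOrder are hypothetical names, not anything from the question.

using Moq;
using NUnit.Framework;

public interface IEmailSender { void Send(string to, string body); }

// Hypothetical class under test: its contract promises one confirmation e-mail per order.
public class OrderService
{
    private readonly IEmailSender sender;
    public OrderService(IEmailSender sender) { this.sender = sender; }

    public void PlaceOrder(string customerEmail)
    {
        // ... order handling elided ...
        sender.Send(customerEmail, "Your order has been placed.");
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void PlaceOrder_SendsOneConfirmationEmail()
    {
        var sender = new Mock<IEmailSender>();
        var service = new OrderService(sender.Object);

        service.PlaceOrder("customer@example.com");

        // Verify the call promised by the interface, not the internal implementation details.
        sender.Verify(s => s.Send("customer@example.com", It.IsAny<string>()), Times.Once());
    }
}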
You should not use the method itself (or any of the internal code it uses) to generate the expected result dynamically. The expected result should be hard-coded into your test case so that it does not change when the implementation changes. Here's a simplified example of what a unit test should do:
[Test]
public void TestAdd()
{
    int x = 5;
    int y = -2;
    int expectedResult = 3;

    Calculator calculator = new Calculator();
    int actualResult = calculator.Add(x, y);

    Assert.AreEqual(expectedResult, actualResult);
}
Note that how the result is calculated is not checked - only that the result is correct. Keep adding more and more simple test cases like the above until you have covered as many scenarios as possible. Use your code coverage tool to see if you have missed any interesting paths.
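One way to "keep adding more and more simple test cases" without duplicating the test body is a parameterised test. Here is a rough NUnit sketch; the Calculator class is a minimal stand-in, since the real one is not shown:

using NUnit.Framework;

// Minimal stand-in for the class under test.
public class Calculator
{
    public int Add(int x, int y) { return x + y; }
}

[TestFixture]
public class CalculatorTests
{
    // Each row is one hard-coded input/expected-output pair.
    [TestCase(5, -2, 3)]
    [TestCase(0, 0, 0)]
    [TestCase(-1, -1, -2)]
    [TestCase(int.MaxValue, 0, int.MaxValue)]
    public void Add_ReturnsExpectedSum(int x, int y, int expected)
    {
        var calculator = new Calculator();

        int actual = calculator.Add(x, y);

        Assert.AreEqual(expected, actual);
    }
}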

For unit testing, I found both Test Driven (tests first, code second) and code first, test second to be extremely useful.
Instead of writing the code and then writing tests as an afterthought, write the code and then look at what you THINK the code should be doing. Think about all of its intended uses and then write a test for each. I find writing tests to be faster, but more involved, than the coding itself. The tests should test the intention. Thinking about the intentions also means you wind up finding corner cases during the test-writing phase. And of course, while writing tests you might find that one of the few uses causes a bug (something I often find, and I am very glad when such a bug did not corrupt data and go unchecked).
Yet testing is almost like coding twice. In fact I have had applications where there was more test code (by quantity) than application code. One example was a very complex state machine. I had to make sure that after adding more logic to it, the entire thing always worked on all previous use cases. And since those cases were quite hard to follow by looking at the code, I wound up having such a good test suite for this machine that I was confident it would not break even after making changes, and the tests saved my ass a few times. And as users or testers found bugs in the flow or corner cases that were unaccounted for, guess what: they got added to the tests and never happened again. This really gave users confidence in my work, in addition to making the whole thing super stable. And when it had to be re-written for performance reasons, guess what: it worked as expected on all inputs, thanks to the tests.
All the simple examples like function square(number) are great and all, but they are probably bad candidates for spending lots of time testing. The ones that do important business logic, that's where the testing is important. Test the requirements. Don't just test the plumbing. If the requirements change then, guess what, the tests must change too.
Testing should not literally be testing that function foo invoked function bar 3 times. That is wrong. Check whether the result and side effects are correct, not the inner mechanics.
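To make the contrast concrete, here is a rough sketch (all names hypothetical, not from the question): instead of verifying how many internal calls happened, the test asserts on the returned value and on the observable side effect, here captured by a hand-rolled fake repository.

using System.Collections.Generic;
using NUnit.Framework;

public interface IOrderRepository { void Save(string order); }

// Hand-rolled fake that records the observable side effect.
public class FakeOrderRepository : IOrderRepository
{
    public readonly List<string> Saved = new List<string>();
    public void Save(string order) { Saved.Add(order); }
}

public class OrderProcessor
{
    private readonly IOrderRepository repository;
    public OrderProcessor(IOrderRepository repository) { this.repository = repository; }

    public int Process(IEnumerable<string> orders)
    {
        int count = 0;
        foreach (var order in orders) { repository.Save(order); count++; }
        return count;
    }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Process_SavesEveryOrderAndReportsTheCount()
    {
        var repository = new FakeOrderRepository();
        var processor = new OrderProcessor(repository);

        int processed = processor.Process(new[] { "a", "b", "c" });

        // Assert on the result and the side effect, not on the inner mechanics.
        Assert.AreEqual(3, processed);
        CollectionAssert.AreEqual(new[] { "a", "b", "c" }, repository.Saved);
    }
}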

It's worth noting that retro-fitting unit tests into existing code is far more difficult than driving the creation of that code with tests in the first place. That's one of the big questions in dealing with legacy applications... how to unit test? This has been asked many times before (so you may be closed as a dupe question), and people usually end up here:
Moving existing code to Test Driven Development
I second the accepted answer's book recommendation, but beyond that there's more information linked in the answers there.

Don't write tests to get full coverage of your code. Write tests that guarantee your requirements. You may discover codepaths that are unnecessary. Conversely, if they are necessary, they are there to fulfill some kind of requirement; find out what it is and test the requirement (not the path).
Keep your tests small: one test per requirement.
Later, when you need to make a change (or write new code), try writing one test first. Just one. Then you'll have taken the first step in test-driven development.

Unit testing is about the output you get from a function/method/application.
It does not matter at all how the result is produced, it just matters that it is correct.
Therefore, your approach of counting calls to inner methods and such is wrong.
What I tend to do is sit down and write what a method should return given certain input values or a certain environment, then write a test which compares the actual value returned with what I came up with.

Try writing a Unit Test before writing the method it is going to test.
That will definitely force you to think a little differently about how things are being done. You'll have no idea how the method is going to work, just what it is supposed to do.
You should always be testing the results of the method, not how the method gets those results.

Tests are supposed to improve maintainability. If you change a method and a test breaks, that can be a good thing. On the other hand, if you look at your method as a black box, then it shouldn't matter what is inside the method. The fact is you need to mock things for some tests, and in those cases you really can't treat the method as a black box. The only thing you can do is to write an integration test: you load up a fully instantiated instance of the service under test and have it do its thing as it would when running in your app. Then you can treat it as a black box.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests seem so tightly bound to the method (testing every code path, expecting certain inner methods to be called a certain number of times with certain arguments) that if I ever refactor the method, the tests will fail even though the final behavior of the method did not change.
This is because you are writing your tests after you wrote your code. If you did it the other way around (wrote the tests first), it wouldn't feel this way.

Related

How do I refactor unit tests? [closed]

This has been driving me nuts lately...
What is refactoring?
Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.
And how do we make sure we don't break anything during refactoring?
Before refactoring a section of code, a solid set of automatic unit tests is needed. The tests are used to demonstrate that the behavior of the module is correct before the refactoring.
Okay fine. But how do I proceed if I find a code smell in the unit tests themselves? Say, a test method that does too much? How do I make sure I don't break anything while refactoring the unit tests?
Do I need some kind of meta-tests? Is it unit tests all the way down?
Or do unit tests simply not obey the normal rules of refactoring?
In my experience, there are two reasons to trust tests:
Review
You've seen it fail
Both of these are activities that happen when a test is written. If you keep tests immutable, you can keep trusting them.
Every time you modify a test, it becomes less trustworthy.
You can somewhat alleviate that problem by repeating the above process: review the changes to the tests, and temporarily change the System Under Test (SUT) so that you can see the tests fail as expected.
When modifying tests, keep the SUT unchanged. Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
With respect, since this is an older post that was referenced in a comment on my post about TDD in practice, upon review I'd like to throw in my two cents.
Mainly because I feel the accepted answer makes the slippery statement:
Every time you modify a test, it becomes less trustworthy.
I take issue with the word modify. In regard to refactoring, words like change and modify are often avoided, as they carry implications counter to refactoring.
If you modify a test in the traditional sense there is risk you introduced a change that made the test less trustworthy.
However, if you modify a test in the refactor sense then the test should be no less trustworthy.
This brings me back to the original question:
How do I refactor unit tests?
Quite simply, the same as you would any other code - in isolation.
So, if you want to refactor your tests, don't change the code, just change your tests.
Do I need test for my tests?
No. In fact, Kent Beck addresses this exact question in his Full Stack Radio interview, saying:
Your code is the test for your tests
Mark Seemann also notes this in his answer:
Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
In the end, this is not so much about how to refactor tests as much as it is refactoring in general. The same principles apply, namely refactoring restructures code without changing its external behavior. If you don't change the external behavior, then no trust is lost.
How do I make sure I don't break anything while refactoring the unit tests?
Keep the old tests as a reference.
To elaborate: unit tests with good coverage are worth their weight in results. You don't keep them for amazing program structure or lack of duplication; they're essentially a dataset of useful input/output pairs.
So when "refactoring" tests, it only really matters that the program tested with the new set shows the same behaviour. Every difference should be carefully, manually inspected, because new program bugs might have been found.
You might also accidentally reduce the coverage when refactoring. That's harder to find, and requires specialized coverage analysis tools.
You don't know that you won't break anything. To avoid the problem of 'who will test our tests?', you should keep tests as simple as possible to reduce the possibility of making an error.
When you refactor tests you can always use automated refactorings or other 'trusted' methods like method extraction.
You also often use existing testing frameworks, which are tested by their creators. So when you start to build your own framework (even a simple one), complex helper methods, etc., you can always test those.

Unit Testing: what to test / what not to test?

A few days ago I started to get interested in unit testing and TDD in C# and VS2010. I've read blog posts, watched YouTube tutorials, and plenty more stuff that explains why TDD and unit testing are so good for your code, and how to do it.
But the biggest problem I find is that I don't know what to check in my tests and what not to check.
I understand that I should check all the logical operations, problems with references and dependencies, but for example, should I create a unit test for a string formatting routine that's supposed to handle user input? Or is that just a waste of my time when I could simply check it in the actual code?
Is there any guide to clarify this problem?
In TDD every line of code must be justified by a failing test-case written before the code.
This means that you cannot develop any code without a test-case. If you have a line of code (condition, branch, assignment, expression, constant, etc.) that can be modified or deleted without causing any test to fail, it means this line of code is useless and should be deleted (or you have a missing test to support its existence).
That is a bit extreme, but this is how TDD works. That being said, if you have a piece of code and you are wondering whether it should be tested or not, you are not doing TDD correctly. But if you have a string formatting routine or variable incrementation or whatever small piece of code out there, there must be a test case supporting it.
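For instance, a small string-formatting routine would get a test pinning down its expected output before (or at latest alongside) the code. Here is a rough sketch with a hypothetical FormatUserName helper, purely to show the shape of such a test:

using NUnit.Framework;

public static class InputFormatter
{
    // Hypothetical formatting routine for user-supplied input.
    public static string FormatUserName(string raw)
    {
        return raw == null ? string.Empty : raw.Trim().ToLowerInvariant();
    }
}

[TestFixture]
public class InputFormatterTests
{
    [TestCase("  Alice ", "alice")]
    [TestCase("BOB", "bob")]
    [TestCase(null, "")]
    public void FormatUserName_NormalisesRawInput(string raw, string expected)
    {
        Assert.AreEqual(expected, InputFormatter.FormatUserName(raw));
    }
}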
UPDATE (use-case suggested by Ed.):
Like for example, adding an object to a list and creating a test to see if it is really inside or there is a duplicate when the list shouldn't allow them.
Here is a counterexample; you would be surprised how hard it is to spot copy-paste errors and how common they are:
private Set<String> inclusions = new HashSet<String>();
private Set<String> exclusions = new HashSet<String>();

public void include(String item) {
    inclusions.add(item);
}

public void exclude(String item) {
    inclusions.add(item);
}
On the other hand, testing the include() and exclude() methods alone is overkill, because they do not represent any use cases by themselves. However, they are probably part of some business use case, which is what you should test instead.
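To make that concrete, here is a rough C# analogue of the snippet above (ItemFilter and IsExcluded are hypothetical names). A use-case-level test like this would fail immediately against the buggy exclude() above, even though include() and exclude() each look too trivial to test on their own.

using System.Collections.Generic;
using NUnit.Framework;

// Corrected analogue of the class above.
public class ItemFilter
{
    private readonly HashSet<string> inclusions = new HashSet<string>();
    private readonly HashSet<string> exclusions = new HashSet<string>();

    public void Include(string item) { inclusions.Add(item); }
    public void Exclude(string item) { exclusions.Add(item); } // the copy-paste bug added to inclusions here

    public bool IsExcluded(string item) { return exclusions.Contains(item); }
}

[TestFixture]
public class ItemFilterTests
{
    [Test]
    public void ExcludedItem_IsReportedAsExcluded()
    {
        var filter = new ItemFilter();

        filter.Exclude("spam");

        // Fails against the copy-paste version, which never populates the exclusion set.
        Assert.IsTrue(filter.IsExcluded("spam"));
    }
}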
Obviously you shouldn't test whether x in x = 7 is really 7 after assignment. Also, testing generated getters/setters is overkill. But it is the easiest code that often breaks, all too often due to copy-and-paste errors or typos (especially in dynamic languages).
See also:
Mutation testing
Your first few TDD projects are going to probably result in worse design/redesign and take longer to complete as you are learning (at least in my experience). This is why you shouldn't jump into using TDD on a large critical project.
My advice is to use "pure" TDD (acceptance/unit test everything test-first) on a few small projects (100-10,000 LOC). Either do the side projects on your own or if you don't code in your free time, use TDD on small internal utility programs for your job.
After you do "pure" TDD on about 6-12 projects, you will start to understand how TDD affects design and learn how to design for testability. Once you know how to design for testability, you will need to TDD less and maximize the ROI of unit, regression, acceptance, etc. tests rather than test everything up front.
For me, TDD is more of a teaching method for good code design than a practical methodology. However, I still TDD logic code, and I unit test instead of debugging.
There is no simple answer to this question. There is the law of diminishing returns in action, so achieving perfect coverage is seldom worth it. Knowing what to test is a thing of experience, not rules. It’s best to consciously evaluate the process as you go. Did something break? Was it feasible to test? If not, is it possible to rewrite the code to make it more testable? Is it worth it to always test for such cases in the future?
If you split your code into models, views and controllers, you’ll find that most of the critical code is in the models, and those should be fairly testable. (That’s one of the main points of MVC.) If a piece of code is critical, I test it, even if it means that I would have to rewrite it to make it more testable. If a piece of code is easy to get wrong or get broken by future updates, it gets a test. I seldom test controllers and views, as it’s not proving worth the trouble for me.
The way I see it all of your code falls into one of three buckets:
Code that is easy to test: This includes your own deterministic public methods.
Code that is difficult to test: This includes GUI, non-deterministic methods, private methods, and methods with complex setup.
Code that you don't want to test: This includes 3rd party code, and code that is difficult to test and not worth the effort.
Of the three, you should focus on testing the easy code. The difficult-to-test code should be refactored into two parts: code that you don't want to test and easy-to-test code. And of course, you should test the refactored easy code.
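A rough sketch of that split, with hypothetical names: a method that mixes file IO with a calculation is difficult to test, but extracting the calculation leaves a pure, easy-to-test method plus a thin IO wrapper you may reasonably choose not to unit test.

using System.IO;
using System.Linq;
using NUnit.Framework;

public static class InvoiceTotals
{
    // Difficult to test before the split: it mixed file IO with the calculation.
    // Afterwards it is just a thin wrapper, which you may decide not to unit test.
    public static int TotalFromFile(string path)
    {
        return Total(File.ReadAllLines(path));
    }

    // Easy to test: pure, deterministic logic extracted from the method above.
    public static int Total(string[] lines)
    {
        return lines.Where(l => !string.IsNullOrWhiteSpace(l)).Sum(l => int.Parse(l));
    }
}

[TestFixture]
public class InvoiceTotalsTests
{
    [Test]
    public void Total_IgnoresBlankLinesAndSumsTheRest()
    {
        Assert.AreEqual(6, InvoiceTotals.Total(new[] { "1", "", "5" }));
    }
}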
I think you should only unit test entry points to the behavior of the system. This includes public methods, public accessors and public fields, but not constants (constant fields, enums, methods, etc.). It also includes any code which directly deals with IO; I explain why further below.
My reasoning is as follows:
Everything that's public is basically an entry point to a behavior of the system. A unit test should therefore be written that guarantees that the expected behavior of that entry point works as required. You shouldn't test all possible ways of calling the entry point, only the ones that you explicitly require. Your unit tests are therefore also the specs of what behavior your system supports and your documentation of how to use it.
Things that are not public can basically be deleted/refactored at will with no impact on the behavior of the system. If you were to test those, you'd create a hard dependency from your unit tests to that code, which would prevent you from refactoring it. That's why you should not test anything but public methods, fields and accessors.
Constants by design are not behavior, but axioms. A unit test that verifies a constant is itself a constant, so it would only be duplicated code and useless effort to write a test for constants.
So to answer your specific example:
should I create a unit test for a string formatting routine that's supposed to handle user input?
Yes, absolutely. All methods which receive or send external input/output (which can be summed up as receiving IO), should be unit tested. This is probably the only case where I'd say non-public things that receive IO should also be unit tested. That's because I consider IO to be a public entry. Anything that's an entry point to an external actor I consider public.
So unit test public methods, public fields, public accessors, even when those are static constructs and also unit test anything which receives or sends data from an external actor, be it a user, a database, a protocol, etc.
NOTE: You can write temporary unit tests on non public things as a way for you to help make sure your implementation works. This is more of a way to help you figure out how to implement it properly, and to make sure your implementation works as you intend. After you've tested that it works though, you should delete the unit test or disable it from your test suite.
Kent Beck, in Extreme Programming Explained, said you only need to test the things that need to work in production.
That's a brusque way of encapsulating both test-driven development, where every change in production code is supported by a test that fails when the change is not present; and You Ain't Gonna Need It, which says there's no value in creating general-purpose classes for applications that only deal with a couple of specific cases.
I think you have to change your point of view.
In a pure form TDD requires the red-green-refactor workflow:
write test (it must fail) RED
write code to satisfy test GREEN
refactor your code
So the question "What I have to test?" has a response like: "You have to write a test that correspond to a feature or a particular requirements".
In this way you get must code coverage and also a better code design (remember that TDD stands also for Test Driven "Design").
Generally speaking, you have to test ALL public methods/interfaces.
should I create a unit test for a string formatting routine that's supposed to handle user input? Or is that just a waste of my time when I could simply check it in the actual code?
Not sure I understand what you mean, but the tests you write in TDD are supposed to test your production code. They aren't tests that check user input.
To put it another way, there can be TDD unit tests that test the user input validation code, but there can't be TDD unit tests that validate the user input itself.
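A rough sketch of that distinction, with a hypothetical AgeValidator: the unit tests exercise the validation code with both good and bad input; they do not (and cannot) validate whatever a real user will eventually type.

using NUnit.Framework;

public static class AgeValidator
{
    // Hypothetical validation routine for user input.
    public static bool IsValid(string input)
    {
        int age;
        return int.TryParse(input, out age) && age >= 0 && age <= 130;
    }
}

[TestFixture]
public class AgeValidatorTests
{
    [TestCase("42", true)]
    [TestCase("-1", false)]
    [TestCase("abc", false)]
    [TestCase("", false)]
    public void IsValid_AcceptsOnlyPlausibleAges(string input, bool expected)
    {
        Assert.AreEqual(expected, AgeValidator.IsValid(input));
    }
}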

Unit test documentation [closed]

I would like to know from those who document unit tests how they are documenting it. I understand that most TDD followers claim "the code speaks" and thus test documentation is not very important because code should be self-descriptive. Fair enough, but I would like to know how to document unit tests, not whether to document them at all.
My experience as a developer tells me that understanding old code (this includes unit tests) is difficult.
So what is important in a test documentation? When is the test method name not descriptive enough so that documentation is justified?
As requested by Thorsten79, I'll elaborate on my comments as an answer. My original comment was:
"The code speaks" is unfortunately
completely wrong, because a
non-developer cannot read the code,
while he can at least partially read
and understand generated
documentation, and this way he can
know what the tests test. This is
especially important in cases where
the customer fully understands the
domain and just can't read code, and
gets even more important when the unit
tests also test hardware, like in the
embedded world, because then you test
things that can be seen.
When you're doing unit tests, you have to know whether you're writing them just for you (or for your co-workers), or if you're also writing them for other people. Many times, you should be writing code for your readers, rather than for your convenience.
In mixed hardware/software development like in my company, the customers know what they want. If their field device has to do a reset when receiving a certain bus command, there must be a unit test that sends that command and checks whether the device was reset. We're doing this here right now with NUnit as the unit test framework, and some custom software and hardware that makes sending and receiving commands (and even pressing buttons) possible. It's great, because the only alternative would be to do all that manually.
The customer absolutely wants to know which tests are there, and he even wants to run the tests himself. If the tests are not properly documented, he doesn't know what a test does and can't check whether all the tests he thinks he'll need are there, and when running a test he doesn't know what it will do, because he can't read the code. He knows the bus system in use better than our developers, but he just can't read the code. If a test fails, he does not know why and cannot even say what he thinks the test should do. That's not a good thing.
Having documented the unit tests properly, we have
code documentation for the developers
test documentation for the customer, which can be used to prove that the device does what it should do, i.e. what the customer ordered
the ability to generate the documentation in any format, which can even be passed to other involved parties, like the manufacturer
Properly in this context means: Write clear language that can be understood by non-developers. You can stay technical, but don't write things only you can understand. The latter is of course also important for any other comments and any code.
Independent of our exact situation, I think that's what I would want in unit tests all the time, even if they're pure software. A customer can ignore a unit test he doesn't care about, like basic function tests. But just having the docs there never hurts.
As I've written in a comment to another answer: In addition, the generated documentation is also a good starting point if you (or your boss, or co-worker, or the testing department) wants to examine which tests are there and what they do, because you can browse it without digging through the code.
In the test code itself:
With method-level comments explaining what the test is testing / covering.
At the class level, a comment indicating the actual class being tested (which could actually be inferred from the test class name, so that's less important than the comments at the method level).
With test coverage reports, such as Cobertura. That's also documentation, since it indicates what your tests are covering and what they're not.
Comment complex tests or scenarios if required but favour readable tests in the first place.
On the other hand, I try and make my tests speak for themselves. In other words:
[Test]
public void person_should_say_hello() {
    // Arrange.
    var person = new Person();

    // Act.
    string result = person.SayHello();

    // Assert.
    Assert.AreEqual("Hello", result, "Person did not say hello");
}
If I was to look at this test, I'd see that it uses Person (though PersonTest.cs would be a clue ;)) and that if anything breaks, it will be in the SayHello method. The assert message is useful as well, not only when reading tests but also when they are run, since it's easier to see failures in GUIs.
Following the AAA style of Arrange, Act and Assert makes the test essentially document itself. If this test was more complex, you could add comments above the test function explaining what's going on. As always, you should ensure these are kept up to date.
As a side note, using underscore notation for test names makes them much more readable; compare this to:
public void PersonShouldSayHello()
Which for long method names, can make reading the test more difficult. Though this point is often subjective.
When I come back to an old test and don't understand it right away:
I refactor if possible
or I write the comment that would have made me understand it right away
When you are writing your test cases it is the same as when you are writing your code: everything is crystal clear to you. That makes it difficult to envision what you should write to make the code clearer.
Note that this does not mean I never write any comments. There still are plenty of situations when I just know that I will going to have a hard time figuring out what a particular piece of code does.
I usually start with point 1 in these situations...
Improving unit tests as executable specifications is the point of Behaviour-Driven Development: BDD is an evolution of TDD where unit tests use a Ubiquitous Language (a language based on the business domain and shared by the developers and the stakeholders) and expressive names (testCannotCreateDuplicateEntry) to describe what the code is supposed to do. Some BDD frameworks push the idea very far and show executable specifications written in almost natural language, for example.
I would advise against any detailed documentation separate from code. Why? Because whenever you need it, it will most likely be very outdated. The best place for detailed documentation is the code itself (including comments). BTW, anything you need to say about a specific unit test is very detailed documentation.
A few pointers on how to achieve well self-documented tests:
Follow a standard way to write all tests, like AAA pattern. Use a blank line to separate each part. That makes it much easier for the reader to identify the important bits.
You should include, in every test name: what is being tested, the situation under test, and the expected behavior. For example: test__getAccountBalance__NullAccount__raisesNullArgumentException() (a sketch of this convention follows this list).
Extract out common logic into set up/teardown or helper methods with descriptive names.
Whenever possible use samples from real data for input values. This is much more informative than blank objects or made up JSON.
Use variables with descriptive names.
Think about your future self or a teammate: if you remembered nothing about this, would you like any additional information when the test fails? Write that down as comments.
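Here is a rough C# rendering of the naming convention mentioned above, using a hypothetical AccountService; the test name spells out what is tested, the situation, and the expected behavior.

using System;
using NUnit.Framework;

public class Account
{
    public decimal Balance { get; set; }
}

// Hypothetical class under test.
public class AccountService
{
    public decimal GetAccountBalance(Account account)
    {
        if (account == null) throw new ArgumentNullException("account");
        return account.Balance;
    }
}

[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void GetAccountBalance_NullAccount_ThrowsArgumentNullException()
    {
        var service = new AccountService();

        // What is tested / the situation / the expected behavior, all encoded in the name above.
        Assert.Throws<ArgumentNullException>(() => service.GetAccountBalance(null));
    }
}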
And to complement what other answers have said:
It's great if your customer/Product Owner/boss has a very good idea as to what should be tested and is eager to help, but unit tests are not the best place to do it. You should use acceptance tests for this.
Unit tests should cover specific units of code (methods/functions within classes/modules); if you cover more ground, they will quickly turn into integration tests, which are fine and needed too, but if you do not separate them explicitly, people will get them confused and you will lose some of the benefits of unit testing. For example, when a unit test fails you should get instant bug detection (especially if you follow the naming convention above). When an integration test fails, you know there is a problem, and you know some of its effects, but you might need to debug, sometimes for a long time, to find what it is.
You can use unit testing frameworks for integration tests if you want, but you should know you are not doing unit testing, and you should keep them in separate files/directories.
There are good acceptance/behavior testing frameworks (FitNesse, Robot, Selenium, Cucumber, etc.) that can help business/domain people not just read, but also write the tests themselves. Sure, they will need help from coders to get them to work (especially when starting out), but they will be able to do it, and they do not need to know anything about your modules, classes or functions.

Refactoring and Test Driven Development

I'm currently reading two excellent books, "Working Effectively with Legacy Code" and "Clean Code".
They are making me think about the way I write and work with code in completely new ways, but one theme that is common to both is test-driven development and the idea of smothering everything with tests, having tests in place before you make a change or implement a new piece of functionality.
This has led to two questions:
Question 1:
If I am working with legacy code, then according to the books I should put tests in place to ensure I'm not breaking anything. Consider that I have a method 500 lines long. I would assume I'll have a set of equivalent testing methods to test that method. When I split this function up, do I create new tests for each new method/class that results?
According to "Clean Code", any test that takes longer than 1/10th of a second is a test that takes too long. Trying to test a 500-line legacy method that goes to databases and does god knows what else could well take longer than 1/10th of a second. While I understand you need to break dependencies, what I'm having trouble with is the initial test creation.
Question 2:
What happens when the code is re-factored so much that structurally it no longer resembles the original code (new parameters added to or removed from methods, etc.)? It would follow that the tests will need refactoring too. In that case you could potentially be altering the functionality of the system while allowing the tests to keep passing. Is refactoring tests an appropriate thing to do in this circumstance?
While it's OK to plod on with assumptions, I was wondering whether there are any thoughts/suggestions on such matters from collective experience.
That's the deal when working with legacy code, legacy meaning a system with no tests and which is tightly coupled. When adding tests for that code, you are effectively adding integration tests. When you refactor and add the more specific test methods that avoid the network calls, etc., those will be your unit tests. You want to keep both, just keep them separate; that way most of your unit tests will run fast.
You do that in really small steps. You actually switch continually between tests and code, and you are correct: if you change a signature (a small step), related tests need to be updated.
Also check my "update 2" on How can I improve my junit tests. It isn't about legacy code and dealing with the coupling it already has, but on how you go about writing logic + tests where external systems are involved i.e. databases, emails, etc.
The 0.1s unit test run time is fairly silly. There's no reason unit tests shouldn't use a network socket, read a large file or perform other hefty operations if they have to. Yes, it's nice if the tests run quickly so you can get on with the main job of writing the application, but it's much nicer to end up with the best result at the end, and if that means running a unit test that takes 10s, then that's what I'd do.
If you're going to refactor the key is to spend as much time as you need to understand the code you are refactoring. One good way of doing that would be to write a few unit tests for it. As you grasp what certain blocks of code are doing you could refactor it and then it's good practice to write tests for each of your new methods as you go.
Yes, create new tests for new methods.
I'd see the 1/10 of a second as a goal you should strive for. A slower test is still much better than no test.
Try not to change the code and the test at the same time. Always take small steps.
When you've got a lengthy legacy method that does X (and maybe Y and Z because of its size), the real trick is not breaking the app by 'fixing' it. The tests on the legacy app have preconditions and postconditions and so you've got to really know those before you go breaking it up. The tests help to facilitate that. As soon as you break that method into two or more new methods, obviously you need to know the pre/post states for each of those and so tests for those 'keep you honest' and let you sleep better at night.
I don't tend to worry too much about the 1/10th of a second assertion. Rather, the goal when I'm writing unit tests is to cover all my bases. Obviously, if a test takes a long time, it might be because what is being tested is simply way too much code doing way too much.
The bottom line is that you definitely don't want to take what is presumably a working system and 'fix' it to the point that it works sometimes and fails under certain conditions. That's where the tests can help. Each of them expects the world to be in one state at the beginning of the test and a new state at the end. Only you can know if those two states are correct. All the tests can 'pass' and the app can still be wrong.
Anytime the code gets changed, the tests will possibly change and new ones will likely need to be added to address changes made to the production code. Those tests work with the current code - doesn't matter if the parameters needed to change, there are still pre/post conditions that have to be met. It isn't enough, obviously, to just break up the code into smaller chunks. The 'analyst' in you has to be able to understand the system you are building - that's job one.
Working with legacy code can be a real chore depending on the 'mess' you start with. I really find that knowing what you've got and what it is supposed to do (and whether it actually does it at step 0 before you start refactoring it) is key to a successful refactoring of the code. One goal, I think, is that I ought to be able to toss out the old stuff, stick my new code in its place and have it work as advertised (or better). Depending on the language it was written in, the assumptions made by the original author(s) and the ability to encapsulate functionality into containable chunks, it can be a real trick.
Best of luck!
Here's my take on it:
No and yes. First things first: have a unit test that checks the output of that 500-line method. Only then should you begin thinking of splitting it up. Ideally the process will go like this:
Write a test for the original legacy 500-line behemoth
Figure out, marking first with comments, what blocks of code you could extract from that method
Write a test for each block of code. All will fail.
Extract the blocks one by one. Concentrate on getting the tests to go green one at a time.
Rinse and repeat until you've finished the whole thing
After this long process you will realize that it might make sense that some methods be moved elsewhere, or are repetitive and several can be reduced to a single function; this is how you know that you succeeded. Edit tests accordingly.
Go ahead and refactor, but as soon as you need to change signatures make the changes in your test first before you make the change in your actual code. That way you make sure that you're still making the correct assertions given the change in method signature.
Question 1: "When I split this function up, do I create new tests for each new method/class that results?"
As always the real answer is it depends. If it is appropriate, it may be simpler when refactoring some gigantic monolithic methods into smaller methods that handle different component parts to set your new methods to private/protected and leave your existing API intact in order to continue to use your existing unit tests. If you need to test your newly split off methods, sometimes it is advantageous to just mark them as package private so that your unit testing classes can get at them but other classes cannot.
Question 2: "What happens when the code is re-factored so much that structurally it no longer resembles the original code?"
My first piece of advice here is that you need to get a good IDE and have a good knowledge of regular expressions: try to do as much of your refactoring using automated tools as possible. This can help save time if you are cautious enough not to introduce new problems. As you said, you have to change your unit tests, but if you used good OOP principles when writing the original code (you did, right?), then it shouldn't be so painful.
Overall, it is important to ask yourself with regards to the refactor do the benefits outweigh the costs? Am I just fiddling around with architectures and designs? Am I doing a refactor in order to understand the code and is it really needed? I would consult a coworker who is familiar with the code base for their opinion on the cost/benefits of your current task.
Also remember that the theoretical ideal you read in books needs to be balanced with real world business needs and time schedules.

How deep are your unit tests?

The thing I've found about TDD is that it takes time to get your tests set up, and being naturally lazy I always want to write as little code as possible. The first thing I seem to do is test that my constructor has set all the properties, but is this overkill?
My question is: to what level of granularity do you write your unit tests?
...and is there such a thing as testing too much?
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.
Write unit tests for things you expect to break, and for edge cases. After that, test cases should be added as bug reports come in - before writing the fix for the bug. The developer can then be confident that:
The bug is fixed;
The bug won't reappear.
Per the comment attached - I guess this approach to writing unit tests could cause problems, if lots of bugs are, over time, discovered in a given class. This is probably where discretion is helpful - adding unit tests only for bugs that are likely to re-occur, or where their re-occurrence would cause serious problems. I've found that a measure of integration testing in unit tests can be helpful in these scenarios - testing code higher up codepaths can cover the codepaths lower down.
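A rough sketch of that "test before fix" step, built around a hypothetical bug report ("GetInitials blows up for single-word names"): the regression test is written from the report first, watched to fail, and only then is the fix applied.

using NUnit.Framework;

public static class Names
{
    public static string GetInitials(string fullName)
    {
        var parts = fullName.Split(' ');

        // Regression fix: an earlier version indexed parts[1] unconditionally and threw for single-word names.
        if (parts.Length == 1) return parts[0].Substring(0, 1).ToUpperInvariant();

        return (parts[0].Substring(0, 1) + parts[parts.Length - 1].Substring(0, 1)).ToUpperInvariant();
    }
}

[TestFixture]
public class NamesTests
{
    [Test] // written from the bug report, before the fix
    public void GetInitials_SingleWordName_ReturnsSingleInitial()
    {
        Assert.AreEqual("M", Names.GetInitials("Madonna"));
    }

    [Test] // original behaviour, kept to show the fix did not break it
    public void GetInitials_TwoWordName_ReturnsBothInitials()
    {
        Assert.AreEqual("JS", Names.GetInitials("John Smith"));
    }
}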
Everything should be made as simple as possible, but not simpler. - A. Einstein
One of the most misunderstood things about TDD is the first word in it: Test. That's why BDD came along, because people didn't really understand that the first D was the important one, namely Driven. We all tend to think a little bit too much about the Testing, and a little bit too little about the driving of design. And I guess that this is a vague answer to your question, but you should probably consider how to drive your code, instead of what you actually are testing; that is something a coverage tool can help you with. Design is a quite bigger and more problematic issue.
To those who propose testing "everything": realise that "fully testing" a method like int square(int x) requires about 4 billion test cases in common languages and typical environments.
In fact, it's even worse than that: a method void setX(int newX) is also obliged not to alter the values of any other members besides x -- are you testing that obj.y, obj.z, etc. all remain unchanged after calling obj.setX(42);?
It's only practical to test a subset of "everything." Once you accept this, it becomes more palatable to consider not testing incredibly basic behaviour. Every programmer has a probability distribution of bug locations; the smart approach is to focus your energy on testing regions where you estimate the bug probability to be high.
The classic answer is "test anything that could possibly break". I interpret that as meaning that testing setters and getters that don't do anything except set or get is probably too much testing, no need to take the time. Unless your IDE writes those for you, then you might as well.
If your constructor not setting properties could lead to errors later, then testing that they are set is not overkill.
I write tests to cover the assumptions of the classes I will write. The tests enforce the requirements. Essentially, if x can never be 3, for example, I'm going to ensure there is a test that covers that requirement.
Invariably, if I don't write a test to cover a condition, it'll crop up later during "human" testing. I'll certainly write one then, but I'd rather catch them early. I think the point is that testing is tedious (perhaps) but necessary. I write enough tests to be complete but no more than that.
Part of the problem with skipping simple tests now is that, in the future, refactoring could make that simple property very complicated, with lots of logic. I think the best idea is that you can use tests to verify the requirements for the module. If when you pass X you should get Y back, then that's what you want to test. Then when you change the code later on, you can verify that X gives you Y, and you can add a test for "A gives you B" when that requirement is added later on.
I've found that the time I spend during initial development writing tests pays off in the first or second bug fix. The ability to pick up code you haven't looked at in 3 months and be reasonably sure your fix covers all the cases, and "probably" doesn't break anything is hugely valuable. You also will find that unit tests will help triage bugs well beyond the stack trace, etc. Seeing how individual pieces of the app work and fail gives huge insight into why they work or fail as a whole.
In most instances, I'd say, if there is logic there, test it. This includes constructors and properties, especially when more than one thing gets set in the property.
With respect to too much testing, it's debatable. Some would say that everything should be tested for robustness, others say that for efficient testing, only things that might break (i.e. logic) should be tested.
I'd lean more toward the second camp, just from personal experience, but if somebody did decide to test everything, I wouldn't say it was too much... a little overkill maybe for me, but not too much for them.
So, No - I would say there isn't such a thing as "too much" testing in the general sense, only for individuals.
Test Driven Development means that you stop coding when all your tests pass.
If you have no test for a property, then why should you implement it? If you do not test/define the expected behaviour in case of an "illegal" assignment, what should the property do?
Therefore I'm totally for testing every behaviour a class should exhibit. Including "primitive" properties.
To make this testing easier, I created a simple NUnit TestFixture that provides extension points for setting/getting the value, takes lists of valid and invalid values, and has a single test to check whether the property works right. Testing a single property could look like this:
[TestFixture]
public class Test_MyObject_SomeProperty : PropertyTest<int>
{
    private MyObject obj = null;

    public override void SetUp() { obj = new MyObject(); }
    public override void TearDown() { obj = null; }

    public override int Get() { return obj.SomeProperty; }
    public override void Set(int value) { obj.SomeProperty = value; }

    public override IEnumerable<int> SomeValidValues() { return new List<int> { 1, 3, 5, 7 }; }
    public override IEnumerable<int> SomeInvalidValues() { return new List<int> { 2, 4, 6 }; }
}
Using lambdas and attributes this might even be written more compactly. I gather MbUnit even has some native support for things like that. The point, though, is that the above code captures the intent of the property.
P.S.: Probably the PropertyTest should also have a way of checking that other properties on the object didn't change. Hmm .. back to the drawing board.
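The PropertyTest<T> base class is not shown here, so the following is only a rough sketch of what it might look like. The meaning of "invalid" is an assumption (such values are expected to be rejected with an ArgumentException), and per the P.S. it does not yet check that other properties stay unchanged.

using System;
using System.Collections.Generic;
using NUnit.Framework;

public abstract class PropertyTest<T>
{
    [SetUp] public abstract void SetUp();
    [TearDown] public abstract void TearDown();

    public abstract T Get();
    public abstract void Set(T value);
    public abstract IEnumerable<T> SomeValidValues();
    public abstract IEnumerable<T> SomeInvalidValues();

    [Test]
    public void PropertyWorksRight()
    {
        foreach (T value in SomeValidValues())
        {
            Set(value);
            Assert.AreEqual(value, Get(), "Property did not round-trip a valid value");
        }

        foreach (T value in SomeInvalidValues())
        {
            // Assumption: invalid values are rejected by throwing an ArgumentException.
            Assert.Throws<ArgumentException>(() => Set(value), "Property accepted an invalid value");
        }
    }
}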
I write unit tests to reach the maximum feasible coverage. If I cannot reach some code, I refactor until the coverage is as full as possible.
After finishing that first blind pass of test writing, I usually write one test case reproducing each bug.
I tend to separate code testing from integration testing. During integration testing (these are also unit tests, but on groups of components, so not exactly what unit tests are for) I test that the requirements are implemented correctly.
So the more I drive my programming by writing tests, the less I worry about the level of granularity of the testing. Looking back, it seems I am doing the simplest thing possible to achieve my goal of validating behaviour. This means I am generating a layer of confidence that my code is doing what I ask it to do; however, this is not considered an absolute guarantee that my code is bug-free. I feel that the correct balance is to test standard behaviour and maybe an edge case or two, then move on to the next part of my design.
I accept that this will not cover all bugs and use other traditional testing methods to capture these.
Generally, I start small, with inputs and outputs that I know must work. Then, as I fix bugs, I add more tests to ensure the things I've fixed are tested. It's organic, and works well for me.
Can you test too much? Probably, but it's probably better to err on the side of caution in general, though it'll depend on how mission-critical your application is.
I think you must test everything in the "core" of your business logic. Getters and setters too, because they could accept a negative or null value that you might not want to accept. If you have time (it always depends on your boss), it's good to test the other business logic and all the controllers that call these objects (moving slowly from unit tests to integration tests).
I don't unit test simple setter/getter methods that have no side effects. But I do unit test every other public method. I try to create tests for all the boundary conditions in my algorithms and check the coverage of my unit tests.
It's a lot of work, but I think it's worth it. I would rather write code (even testing code) than step through code in a debugger. I find the code-build-deploy-debug cycle very time-consuming, and the more exhaustive the unit tests I have integrated into my build, the less time I spend going through that cycle.
You didn't say which architecture you are coding for, but for Java I use Maven 2, JUnit, DbUnit, Cobertura, and EasyMock.
The more I read about it the more I think some unit tests are just like some patterns: A smell of insufficient languages.
When you need to test whether your trivial getter actually returns the right value, it is because you may mix up the getter name and the member variable name. Enter Ruby's 'attr_reader :name', and this can't happen any more. It's just not possible in Java.
If your getter ever gets nontrivial you can still add a test for it then.
Test the source code that makes you worried about it.
It is not useful to test portions of code that you are very, very confident in, as long as you don't make mistakes in them.
Test bugfixes, so that it is the first and last time you fix a bug.
Test to get confidence of obscure code portions, so that you create knowledge.
Test before heavy and medium refactoring, so that you don't break existing features.
This answer is more about figuring out how many unit tests to use for a given method you know you want to unit test due to its criticality/importance. Using McCabe's basis path testing technique, you can quantitatively get better code-coverage confidence than simple "statement coverage" or "branch coverage" would give you (a worked sketch follows the steps below):
Determine Cyclomatic Complexity value of your method that you want to unit test (Visual Studio 2010 Ultimate for example can calculate this for you with static analysis tools; otherwise, you can calculate it by hand via flowgraph method - http://users.csc.calpoly.edu/~jdalbey/206/Lectures/BasisPathTutorial/index.html)
List the basis set of independent paths that flow through your method - see the link above for a flowgraph example
Prepare unit tests for each independent basis path determined in step 2
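As a rough worked example of steps 1-3 (the Shipping class is hypothetical): the method below has cyclomatic complexity 3 (two decision points plus one), so the basis set contains three independent paths, and each path gets its own unit test.

using NUnit.Framework;

public static class Shipping
{
    // Cyclomatic complexity 3: two decision points + 1.
    public static decimal Cost(decimal orderTotal, bool expedited)
    {
        decimal cost = 5m;                  // base rate
        if (orderTotal >= 100m) cost = 0m;  // decision 1: free-shipping threshold
        if (expedited) cost += 10m;         // decision 2: expedited surcharge
        return cost;
    }
}

[TestFixture]
public class ShippingBasisPathTests
{
    [Test] // basis path 1: both conditions false
    public void Cost_SmallStandardOrder_IsBaseRate()
    {
        Assert.AreEqual(5m, Shipping.Cost(50m, false));
    }

    [Test] // basis path 2: first condition true, second false
    public void Cost_LargeStandardOrder_IsFree()
    {
        Assert.AreEqual(0m, Shipping.Cost(150m, false));
    }

    [Test] // basis path 3: first condition false, second true
    public void Cost_SmallExpeditedOrder_AddsSurcharge()
    {
        Assert.AreEqual(15m, Shipping.Cost(50m, true));
    }
}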