I'm still getting to grips with the whole TDD concept. Should I be writing tests for properties? Should one only write tests on properties containing sufficient logic? Any thoughts or examples on this would be great.
Writing a test for a public getter of a private field will not give you much if the getter does nothing except return the private field. But if it does contain some logic (or just something that can fail, like converting your private Int32 field to a Byte), testing such a property starts to make sense.
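For instance, here is a minimal sketch (JUnit 4 assumed; the Sensor class and its names are made up for illustration) of the kind of property that is worth a test, because the getter narrows an int to a byte and can fail:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class: the getter narrows an int field to a byte,
// which can overflow -- exactly the kind of logic worth testing.
class Sensor {
    private int level;

    void setLevel(int level) { this.level = level; }

    byte getLevelAsByte() {
        if (level < Byte.MIN_VALUE || level > Byte.MAX_VALUE) {
            throw new IllegalStateException("level does not fit in a byte: " + level);
        }
        return (byte) level;
    }
}

public class SensorTest {
    @Test
    public void returnsLevelWhenItFitsInAByte() {
        Sensor s = new Sensor();
        s.setLevel(100);
        assertEquals(100, s.getLevelAsByte());
    }

    @Test(expected = IllegalStateException.class)
    public void rejectsLevelsThatDoNotFitInAByte() {
        Sensor s = new Sensor();
        s.setLevel(300);    // would silently wrap without the range check
        s.getLevelAsByte();
    }
}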
Test things that have a reasonable chance of failure - you gain no extra confidence by testing properties with no logic beyond get/set.
A simple rule of thumb I use when doing TDD: always write tests that fail first.
If a test fails at first it's a good test for TDD. It means something is not yet implemented, or not implemented as it should be. Then you can change code to make it pass. A test that succeeds at first is a bad test. You don't even know if it succeeded because you made some mistake writing the test, or because what you are testing is already working.
If you can produce a test failure using properties, then write tests for those properties. Typically you should start by writing a test for a setter or getter before implementing it. An unimplemented setter or getter may look trivial, but it still makes a test fail. And why would you write any line of code, even a setter or getter, if it isn't driven by a test failure?
Other kinds of tests, like tests as documentation showing how to use an API, are also very useful, and good agile practice, but that is not TDD. TDD is about actively trying to break the code until you can't any more. Then you run functional tests, and if you pushed the unit tests hard enough, and if there isn't some integration or system problem getting in the way, all should be fine.
I usually write JUnit tests for accessors. It doesn't add much when they're written, except to keep coverage statistics pretty. But if someone adds "sufficient logic" later to the production code, the tests will already be in place to catch any mistakes.
Also it is the work of moments to write a test to check the value returned by a getter.
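A hypothetical example (JUnit 4 assumed; the Customer class is invented for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class with a plain accessor pair.
class Customer {
    private String name;
    String getName() { return name; }
    void setName(String name) { this.name = name; }
}

public class CustomerTest {
    @Test
    public void nameAccessorRoundTripsItsValue() {
        Customer customer = new Customer();
        customer.setName("Alice");
        assertEquals("Alice", customer.getName());
    }
}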
Instead of thinking of them as tests, think of each test as an example of how someone could use your code.
Instead of just testing properties, think about the behaviour which changes when the properties have different values, and give an example of the behaviour of the class in each meaningful context.
If it's really just a data property you can test by inspection, or with automated acceptance tests, or manually, maybe with a tester's help. Otherwise, don't worry about testing each method, or each property - just show how you can use the code and how you expect it to behave.
Think of TDD and unit tests as a way to "see ahead": you write the test against the public signature you feel your classes should have.
A few days ago I started getting interested in unit testing and TDD in C# and VS2010. I've read blog posts, watched YouTube tutorials, and plenty more that explains why TDD and unit testing are so good for your code, and how to do it.
But the biggest problem I find is that I don't know what to check in my tests and what not to check.
I understand that I should check all the logical operations, and problems with references and dependencies, but for example, should I create a unit test for a string formatting that's supposed to be user input? Or is that just wasting my time when I can just check it in the actual code?
Is there any guide to clarify this problem?
In TDD every line of code must be justified by a failing test-case written before the code.
This means that you cannot develop any code without a test-case. If you have a line of code (condition, branch, assignment, expression, constant, etc.) that can be modified or deleted without causing any test to fail, it means this line of code is useless and should be deleted (or you have a missing test to support its existence).
That is a bit extreme, but this is how TDD works. That being said, if you have a piece of code and you are wondering whether it should be tested or not, you are not doing TDD correctly: in TDD the test comes first, so the question never arises. Whether it's a string formatting routine, a variable increment, or whatever small piece of code, there must be a test case supporting it.
UPDATE (use-case suggested by Ed.):
Like for example, adding an object to a list and creating a test to see if it is really inside or there is a duplicate when the list shouldn't allow them.
Here is a counterexample, you would be surprised how hard it is to spot copy-paste errors and how common they are:
private Set<String> inclusions = new HashSet<String>();
private Set<String> exclusions = new HashSet<String>();

public void include(String item) {
    inclusions.add(item);
}

public void exclude(String item) {
    inclusions.add(item);   // copy-paste error: this should be exclusions.add(item)
}
On the other hand, testing the include() and exclude() methods alone is overkill, because they do not represent any use cases by themselves. However, they are probably part of some business use case, which is what you should test instead.
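For illustration, here is a sketch of such a use-case level test (JUnit 4 assumed; the Filter wrapper and its accepts() query are invented for this example), which catches the copy-paste bug above without ever testing include() or exclude() in isolation:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import java.util.HashSet;
import java.util.Set;
import org.junit.Test;

public class FilterTest {

    // Hypothetical wrapper around the snippet above; the accepts() query
    // is assumed here purely for illustration.
    static class Filter {
        private Set<String> inclusions = new HashSet<String>();
        private Set<String> exclusions = new HashSet<String>();

        public void include(String item) { inclusions.add(item); }

        public void exclude(String item) { inclusions.add(item); } // the copy-paste bug, carried over

        public boolean accepts(String item) {
            return inclusions.contains(item) && !exclusions.contains(item);
        }
    }

    // A use-case level test: it never mentions the sets directly,
    // yet it fails because exclude() fills the wrong set.
    @Test
    public void excludedItemsAreNotAccepted() {
        Filter filter = new Filter();
        filter.include("apples");
        filter.exclude("bananas");

        assertTrue(filter.accepts("apples"));
        assertFalse(filter.accepts("bananas"));  // fails until the bug is fixed
    }
}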
Obviously you shouldn't test whether x in x = 7 is really 7 after the assignment. Testing generated getters/setters is also overkill. But it is exactly this easy code that often breaks, all too often due to copy & paste errors or typos (especially in dynamic languages).
See also:
Mutation testing
Your first few TDD projects are going to probably result in worse design/redesign and take longer to complete as you are learning (at least in my experience). This is why you shouldn't jump into using TDD on a large critical project.
My advice is to use "pure" TDD (acceptance/unit test everything test-first) on a few small projects (100-10,000 LOC). Either do the side projects on your own or if you don't code in your free time, use TDD on small internal utility programs for your job.
After you do "pure" TDD on about 6-12 projects, you will start to understand how TDD affects design and learn how to design for testability. Once you know how to design for testability, you will need to TDD less and maximize the ROI of unit, regression, acceptance, etc. tests rather than test everything up front.
For me, TDD is more of a teaching method for good code design than a practical methodology. However, I still TDD logic code, and I unit test instead of debugging.
There is no simple answer to this question. There is the law of diminishing returns in action, so achieving perfect coverage is seldom worth it. Knowing what to test is a thing of experience, not rules. It’s best to consciously evaluate the process as you go. Did something break? Was it feasible to test? If not, is it possible to rewrite the code to make it more testable? Is it worth it to always test for such cases in the future?
If you split your code into models, views and controllers, you’ll find that most of the critical code is in the models, and those should be fairly testable. (That’s one of the main points of MVC.) If a piece of code is critical, I test it, even if it means that I would have to rewrite it to make it more testable. If a piece of code is easy to get wrong or get broken by future updates, it gets a test. I seldom test controllers and views, as it’s not proving worth the trouble for me.
The way I see it, all of your code falls into one of three buckets:
Code that is easy to test: This includes your own deterministic public methods.
Code that is difficult to test: This includes GUI, non-deterministic methods, private methods, and methods with complex setup.
Code that you don't want to test: This includes 3rd party code, and code that is difficult to test and not worth the effort.
Of the three, you should focus on testing the easy code. The difficult-to-test code should be refactored into two parts: code that you don't want to test, and easy code. And of course, you should test the refactored easy code.
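As a rough, hypothetical sketch of that split (the names are made up, not taken from any real code): the deterministic calculation becomes easy code you test, while the thin UI hand-off stays in the bucket you don't test:

// Hypothetical example of the split: the discount calculation is pure and
// easy to test; the dialog-showing wrapper is the part we choose not to test.
class PricingService {

    // Easy code: deterministic, no setup required -- unit test this.
    static double discountedPrice(double price, int loyaltyYears) {
        double rate = loyaltyYears >= 5 ? 0.10 : 0.0;
        return price * (1.0 - rate);
    }

    // Code we don't want to test: just formats and hands off to the UI layer.
    void showPrice(double price, int loyaltyYears, java.util.function.Consumer<String> ui) {
        ui.accept(String.format("Your price: %.2f", discountedPrice(price, loyaltyYears)));
    }
}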
I think you should only unit test the entry points to the behavior of the system. This includes public methods, public accessors and public fields, but not constants (constant fields, enums, methods, etc.). It also includes any code which directly deals with IO; I explain why further below.
My reasoning is as follows:
Everything that's public is basically an entry point to a behavior of the system. A unit test should therefore be written that guarantees that the expected behavior of that entry point works as required. You shouldn't test all possible ways of calling the entry point, only the ones that you explicitly require. Your unit tests are therefore also the specs of what behavior your system supports and your documentation of how to use it.
Things that are not public can basically be deleted or refactored at will with no impact on the behavior of the system. If you were to test those, you'd create a hard dependency from your unit tests to that code, which would prevent you from refactoring it. That's why you should not test anything but public methods, fields and accessors.
Constants by design are not behavior, but axioms. A unit test that verifies a constant is itself a constant, so it would only be duplicated code and useless effort to write a test for constants.
So to answer your specific example:
should I create a unit test for a string formatting that's supposed to be user input?
Yes, absolutely. All methods which receive or send external input/output (which can be summed up as doing IO) should be unit tested. This is probably the only case where I'd say non-public things that deal with IO should also be unit tested. That's because I consider IO to be a public entry point: anything that's an entry point for an external actor I consider public.
So: unit test public methods, public fields, and public accessors, even when those are static constructs, and also unit test anything which receives data from or sends data to an external actor, be it a user, a database, a protocol, etc.
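To illustrate with the string-formatting example from the question, here is a sketch (JUnit 4 assumed; the normalize() routine is hypothetical) of a test for a routine that cleans up user input:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PhoneNumberFormatterTest {

    // Hypothetical input-normalising routine: it sits at an IO entry point
    // (user input), so it gets tested even though it is "just" formatting.
    static String normalize(String raw) {
        return raw.replaceAll("[^0-9]", "");
    }

    @Test
    public void stripsSpacesAndPunctuationFromUserInput() {
        assertEquals("5551234567", normalize(" (555) 123-4567 "));
    }

    @Test
    public void leavesAlreadyCleanInputUnchanged() {
        assertEquals("5551234567", normalize("5551234567"));
    }
}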
NOTE: You can write temporary unit tests on non public things as a way for you to help make sure your implementation works. This is more of a way to help you figure out how to implement it properly, and to make sure your implementation works as you intend. After you've tested that it works though, you should delete the unit test or disable it from your test suite.
Kent Beck, in Extreme Programming Explained, said you only need to test the things that need to work in production.
That's a brusque way of encapsulating both test-driven development, where every change in production code is supported by a test that fails when the change is not present; and You Ain't Gonna Need It, which says there's no value in creating general-purpose classes for applications that only deal with a couple of specific cases.
I think you have to change your point of view.
In a pure form TDD requires the red-green-refactor workflow:
RED: write a test (it must fail)
GREEN: write code to satisfy the test
REFACTOR: refactor your code
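A minimal sketch of one turn of the cycle (JUnit 4 assumed; the Basket class is invented for illustration): the test is written first and fails, then the simplest code is added to make it pass:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BasketTest {

    // RED: this test is written first and fails until Basket is implemented.
    @Test
    public void addingAnItemIncreasesTheCount() {
        Basket basket = new Basket();
        basket.add("apple");
        assertEquals(1, basket.itemCount());
    }
}

// GREEN: the simplest code that makes the test pass; refactoring comes afterwards.
class Basket {
    private int count;
    void add(String item) { count++; }
    int itemCount() { return count; }
}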
So the question "What do I have to test?" gets an answer like: "You have to write a test that corresponds to a feature or a particular requirement."
In this way you get maximum code coverage and also a better code design (remember that TDD also stands for Test Driven Design).
Generally speaking, you have to test ALL public methods/interfaces.
should I create a unit test for a string formatting that's supposed to be user input? Or is it just wasting my time when I can just check it in the actual code?
Not sure I understand what you mean, but the tests you write in TDD are supposed to test your production code. They aren't tests that check user input.
To put it another way, there can be TDD unit tests that test the user input validation code, but there can't be TDD unit tests that validate the user input itself.
All,
While writing a test method for method A (which has many internal conditions), should I focus on testing a single condition each time? I am finding it difficult to structure a test method for method A that would cover all the code paths in it.
Can anyone suggest how to go about writing such test methods?
Do not feel the need to have one-test-per-method. Keep your unit tests fine-grained, descriptive, and easy to understand. If that means multiple, similar tests all calling the same target method, then do it.
For that matter, try and avoid the habit of systematically writing a unit test for each method. Unit testing should be well thought-out, not habitual. They should describe the behaviour of classes, rather than of individual methods.
You should write a test for every execution path. For example, if you have:
if (cond1 || cond2) { ... }
you should test both cond1 and cond2. Separate test methods are fine and encouraged, if it makes sense. It's OK to have
testMyMethodCond1(){...}
and
testMyMethodCond2(){...}
and whatever else you need.
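For example, a sketch (JUnit 4 assumed; the method and its conditions are invented) with one test per condition:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class EligibilityTest {

    // Hypothetical method with two conditions: if (isMember || total > 100) { ... }
    static boolean qualifiesForDiscount(boolean isMember, double total) {
        return isMember || total > 100;
    }

    // One test per execution path into the if-branch:
    @Test
    public void membersQualifyRegardlessOfTotal() {        // covers cond1
        assertTrue(qualifiesForDiscount(true, 10.0));
    }

    @Test
    public void largeOrdersQualifyEvenForNonMembers() {    // covers cond2
        assertTrue(qualifiesForDiscount(false, 150.0));
    }
}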
Also, when you say your method has 'many internal conditions', maybe you want to refactor your code so some of those conditions are handled in other, smaller methods, that are easier to test.
My preference is to have one failure point per JUnit test method. This means that I will have many JUnit test methods per target class method that I'm testing.
In your example, I may have testA1(), testA2(), testA3() all testing the same method (A). Each of these would test a different success or failure condition of method A.
If there are 8 paths through method A, then you need at least 8 test methods calling it and maybe some for error handling conditions.
First of all, you should consider breaking your method into several methods if it has more than one responsibility. Secondly, I would advise you to write multiple tests for each method. Each test should cover a specific path through the method under test. Each test should also (of course) assert the expected outcome for its test data.
Tests should be simpler, preferably much simpler, than the thing tested. Otherwise the error is more likely to be in your test. So it's much better to have a lot of small simple test methods that execute small parts of your method than one big clunky one. (You can use a code coverage tool like Cobertura to verify that you're covering all paths in your method.)
After reading an interesting article about unit testing behavior instead of state, I came to realize that my unit tests are often tightly coupled to my code because I am using mocks.
I cannot imagine writing unit tests without mocks, but the fact is that these mocks couple my unit tests very tightly to my code because of the expect and andReturn calls.
For example when I create a test that uses a mock, I record all calls to the specific mock and assign return values.
Now when I change the implementation of the actual code for whatever reason, a lot of tests break because the mock did not expect those calls, forcing me to update the unit tests as well, and effectively forcing me to implement every change twice...
This happens a lot.
Is this issue intrinsic to using mocks, and should I learn to live with it, or am I doing something fundamentally wrong?
Please enlighten me :)
Clear examples coming with the explanation are most welcome of course.
when I create a test that uses a mock, I record all calls to the specific mock and assign return values
It sounds like you may be over-specifying expectations.
Try to build as little setup code as possible into your tests: stub (rather than expect) all behavior that doesn't pertain to the current test and only specify return values that are absolutely needed to make your test work.
This answer includes a concise example (as well as an alternative, more detailed explanation).
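To illustrate the difference, here is a sketch using Mockito (an assumption; your mocking framework may differ, and the OrderService and its collaborators are invented): the first test only stubs the one return value it needs, while the second verifies the single interaction it is actually about:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborators, named only for illustration.
    interface PriceCatalog { double priceOf(String sku); }
    interface AuditLog { void record(String message); }

    static class OrderService {
        private final PriceCatalog catalog;
        private final AuditLog audit;
        OrderService(PriceCatalog catalog, AuditLog audit) { this.catalog = catalog; this.audit = audit; }
        double total(String sku, int quantity) {
            double total = catalog.priceOf(sku) * quantity;
            audit.record("order for " + sku);
            return total;
        }
    }

    @Test
    public void totalMultipliesPriceByQuantity() {
        PriceCatalog catalog = mock(PriceCatalog.class);
        AuditLog audit = mock(AuditLog.class);            // stubbed: no expectations, its calls are simply ignored
        when(catalog.priceOf("sku-1")).thenReturn(2.50);  // the one return value this test actually needs

        OrderService service = new OrderService(catalog, audit);

        assertEquals(7.50, service.total("sku-1", 3), 0.0001);
    }

    @Test
    public void recordsAnAuditEntryForEachOrder() {
        PriceCatalog catalog = mock(PriceCatalog.class);  // default stubs (0.0) are good enough here
        AuditLog audit = mock(AuditLog.class);

        new OrderService(catalog, audit).total("sku-1", 3);

        // Only this test, whose subject is the interaction, verifies the call.
        verify(audit).record("order for sku-1");
    }
}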
My experience is to use mocks only at the boundaries of (sub)systems. If I have two classes that are strongly related, I do not mock them apart but test them together. An example might be a composite and a visitor: if I test a concrete visitor, I do not use a mock for the composite but create real composites. One might argue that this is not a unit test (it depends on your definition of a unit), but that doesn't matter much. What I try to achieve is:
Write readable tests (tests without mocks are usually easier to read).
Test only a focused area of code (in the example, the concrete visitor and the relevant part of the composite).
Write fast tests (as long as I instantiate only a few classes, in the example the concrete composites, this is not a concern ... but watch out for transitive creations).
Only when I encounter the boundary of a subsystem do I use mocks. Example: if I have a composite that can render itself to a renderer, I would mock out the renderer when testing the render logic of the composite.
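A rough sketch of that boundary (Mockito and JUnit 4 assumed; the Shape/Renderer names are invented): real composites on the inside, a mock only for the renderer:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class CompositeRenderingTest {

    // Hypothetical boundary: the renderer is a separate subsystem, so it gets mocked.
    interface Renderer { void drawCircle(double radius); }

    interface Shape { void render(Renderer renderer); }

    static class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public void render(Renderer renderer) { renderer.drawCircle(radius); }
    }

    static class Group implements Shape {
        private final List<Shape> children = new ArrayList<>();
        void add(Shape shape) { children.add(shape); }
        public void render(Renderer renderer) {
            for (Shape child : children) {
                child.render(renderer);   // real composites, no mocks inside the composite itself
            }
        }
    }

    @Test
    public void groupRendersAllOfItsChildren() {
        Renderer renderer = mock(Renderer.class);   // mock only at the subsystem boundary
        Group group = new Group();
        group.add(new Circle(1.0));
        group.add(new Circle(2.0));

        group.render(renderer);

        verify(renderer).drawCircle(1.0);
        verify(renderer).drawCircle(2.0);
    }
}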
Testing behavior instead of state looks promising at first, but in general I would test state, as the resulting tests are easier to maintain. Mocks are a cannon; don't crack a nut with a sledgehammer.
If you are fixing the tests because they break, you are not using them as intended.
If the behaviour of a method changes, in test driven development you would first change the test to expect the new behaviour, then implement the new behaviour.
Several good answers here already, but for me a good rule of thumb is to test the requirements of the method, not the implementation. Sometimes that may mean using a mock object because the interaction is the requirement, but you're usually better off testing the return value of the method or the change in state of the object.
In unit testing, the setup method is used to create the objects needed for testing.
In those setup methods, I like using assertions: I know what values I want to see in those objects, and I like to document that knowledge via an assertion.
In a recent post on unit tests calling other unit tests here on stackoverflow, the general feeling seems to be that unit tests should not call other tests:
The answer to that question seems to be that you should refactor your setup, so that test cases do not depend on each other.
But there isn't much difference between a "setup-with-asserts" and a unit test calling other unit tests.
Hence my question: Is it good practice to have assertions in setup methods?
EDIT:
The answer turns out to be: this is not a good practice in general. If the setup results need to be tested, it is recommended to add a separate test method with the assertions (the answer I ticked); for documenting intent, consider using Java asserts.
Instead of assertions in the setup to check the result, I used a simple test (a test method alongside the others, but positioned as the first test method).
I have seen several advantages:
The setup stays short and focused, for readability.
The assertions are run only once, which is more efficient.
Usage and discussion:
For example, I name the method testSetup().
To use it: when I have some test errors in that class, I know that if testSetup() has an error, I don't need to bother with the other errors; I need to fix this one first.
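A sketch of what this looks like (JUnit 4 assumed; the fixture is invented for illustration):

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Before;
import org.junit.Test;

public class AccountReportTest {

    private List<String> accounts;           // hypothetical fixture data

    @Before
    public void setup() {
        accounts = new ArrayList<>();        // the setup stays short: no assertions here
        accounts.add("checking");
        accounts.add("savings");
    }

    // Placed first: if this one fails, fix it before looking at the other failures.
    @Test
    public void testSetup() {
        assertEquals(2, accounts.size());
    }

    @Test
    public void reportListsEveryAccount() {
        // ... the real tests rely on the fixture that testSetup() vouches for
        assertEquals("checking", accounts.get(0));
    }
}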
If someone is bothered by this, and wants to make this dependency explicit, the testSetup() could be called in the setup() method. But I don't think it matters. My point is that, in JUnit, you can already have something similar in the rest of your tests:
some tests that test local code,
and some tests that call more global code, which indirectly calls the same code as the previous tests.
When you read the test results where both fail, you already have to take care of this dependency, which is not in the tests but in the code being called. You have to fix the simple test first, and then rerun the global test to see if it still fails.
This is the reason why I'm not bothered by the implicit dependency I explained before.
Having assertions in the Setup/TearDown methods is not advisable. It makes the test less readable if the user needs to "understand" that some of the test logic is not in the test method.
There are times when you have no choice but to use the setup/teardown methods for something other than what they were intended for.
There is a bigger issue in this question: a test that calls another test is a smell for some problem in your tests.
Each test should test a specific aspect of your code and should only have one or two assertions in it, so if your test calls another test you might be testing too many things in that test.
For more information read: Unit Testing: One Test, One Assertion - Why It Works
They're different scenarios; I don't see the similarity.
Setup methods should contain code that is common to (ideally) all tests in a fixture. As such, there's nothing inherently wrong with putting asserts in a test setup method if certain things must be true before the rest of the test code executes. The setup is an extension of the test; it is part of the test as a whole. If the assert trips, people will discover which pre-requisite failed.
On the other hand, if the setup is complicated enough that you feel the need to assert it is correct, it may be a warning sign. Furthermore, if all tests do not require the setup's full output, then it is a sign that the fixture has poor cohesion and should be split up based on scenarios and/or refactored.
It's partly because of this that I tend to stay away from using Setup methods. Where possible, I use private factory methods or similar to set things up. It makes the test more readable and avoids confusion. Sometimes this is not practical (e.g. working with tightly coupled classes and/or when writing integration tests), but for the majority of my tests it does the job.
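A sketch of the factory-method style I mean (JUnit 4 assumed; the Invoice type is invented for illustration): each test builds exactly the fixture it needs, so there is nothing in a shared setup worth asserting about:

import static org.junit.Assert.assertEquals;
import java.math.BigDecimal;
import org.junit.Test;

public class InvoiceTest {

    // Hypothetical invoice type, shown only to illustrate the factory-method style.
    static class Invoice {
        private BigDecimal total = BigDecimal.ZERO;
        void addLine(BigDecimal amount) { total = total.add(amount); }
        BigDecimal total() { return total; }
    }

    // Private factory instead of a shared Setup method: each test states
    // the fixture it needs, so no setup assertions are required.
    private Invoice invoiceWithLines(String... amounts) {
        Invoice invoice = new Invoice();
        for (String amount : amounts) {
            invoice.addLine(new BigDecimal(amount));
        }
        return invoice;
    }

    @Test
    public void totalIsTheSumOfAllLines() {
        Invoice invoice = invoiceWithLines("10.00", "2.50");
        assertEquals(new BigDecimal("12.50"), invoice.total());
    }
}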
Follow your heart / blink decisions. Asserts within a setup method can document intent and improve readability, so personally I'd back you up on this.
It is different from a test calling other tests - which is bad. No test isolation. A test should not influence the outcome of another test.
Although it is not a frequent use case, I sometimes use asserts inside a setup method so that I know when the test setup has not taken place as I intended, usually when I'm dealing with components that I didn't write myself. An assertion failure which reads 'Setup failed!' in the errors tab quickly helps me zone in on the setup code instead of having to look at a bunch of failed tests.
A setup failure should usually cause all tests in that fixture to fail, which is a smell your nose should soon pick up ('all tests failed' usually implies the setup broke), so assertions are not always needed. That said, be pragmatic, look at your specific context, and add to taste.
I use Java asserts, rather than JUnit ones, in the cases where something like this is necessary, e.g. when you use some other utility class to set up test data:
byte[] pkt = pktFactory.makePacket(TIME, 12, "23, F2");
assert pkt.length == 15;
Failing has the implication 'system is not in a state to even try to run this test'.
For classes that have several setters and getters besides other methods, is it reasonable to save time on writing unit tests for the accessors, taking into account that they will be called while testing the rest of the interface anyway?
I would only unit test them if they do more than set or return a variable. At some point, you need to trust that the compiler is going to generate the right program for you.
Absolutely. The idea of unit tests is to ensure that changes do not affect behavior in unknown ways. You might save some time by not writing a test for getFoo(). If you change the type of Foo to be something a little more complex then you could easily forget to test the accessor. If you are questioning whether you should write a test or not, you are better off writing it.
IMHO, if you are thinking about skipping tests for a method, you might want to ask yourself whether the method is necessary at all. In the interest of full disclosure, I am one of those people who only add a setter or getter when it is proven necessary. You would be surprised how often you really don't need access to a specific member after construction, or only want to give access to the result of some calculation that merely depends on the member. But I digress.
A good mantra is to always add tests. If you don't think that you need one because the method is trivial, consider removing the method instead. I realize that the common advice is that it is okay to skip tests for "trivial" methods but you have to ask yourself if the method is even necessary. If you skip the test, you are saying that the method will always be trivial. Remember that unit tests also function as documentation of the intent and the contract offered. Hence tests of a trivial method state that the method is indeed meant to be trivial.
My criteria for testing is that every piece of code containing conditional logic (while, if, for, etc) be tested. If the accessors are simple getters/setters, I'd say testing them is wasting your time.
You don't have to write tests for properties that contain no logic.
The only reason to test simple properties is to boost test coverage - but that's just silly.
I think it's reasonable to save time and not write unit tests that you don't think will be particularly helpful.
While 100% test coverage is an admirable ideal, at some point you run into diminishing returns where the time you spent writing the test isn't worth the benefit you get out of having it.
You can always go back and add more unit tests later if you find situations where you decide they would be useful.
Our company has both kinds of people and opinions. I tend not to test them specifically, as they are usually:
automatically generated
tested in the context of another test (e.g. there's some other code making use of these accessors)
not containing any code that might break
There are exceptions though:
When they are not simply generated 'getters' and 'setters'
When they are part of an important API that's just provided for other users and not really tested in the context you're currently in
Both these cases might cause me to test them. The first one more than the second.
No friggin' way!
Waste of time!
Even Bob Martin, the grandfather of Agile, says no (see SO podcast 41).
If your IDE generates and manages modifications for member accessors (you won't be doing anything special), then testing them really isn't important; types will match up, naming will follow a template, etc.
I think most people will say testing them is a waste of your time. In the 99% case that is true. If there's a bug in an accessor and the rest of your unit tests don't catch it indirectly then I'd start questioning why that property is there at all.
On the other hand, testing an accessor takes less typing than asking this question :)
Personally I test them. But this is a gray area for me and I don't press other people in my group to test them as long as they have sufficient coverage around the functionality of the class.
Usually when I consider writing unit tests I ask myself the following:
Is the getter/setter accessing anything on the DAL (Data Access Layer)?
If so, then I would include a unit test, just in case: if at some point in the future you decide to implement lazy loading, or something more advanced than a simple get/set, you'll need to make sure it is working properly.
Is it foreseeable that the getter/setter will throw an exception?
The best practice for getters is to not allow them to throw exceptions at all. Setters are another matter. Either way, if you decide that a property might throw an exception, write unit tests for that property, both for a successful access and for purposefully generating the exception.
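For example, a sketch (JUnit 4 assumed; the Person property is invented) testing both the successful access and the exception path:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AgePropertyTest {

    // Hypothetical property whose setter validates its input.
    static class Person {
        private int age;
        int getAge() { return age; }
        void setAge(int age) {
            if (age < 0) {
                throw new IllegalArgumentException("age must not be negative: " + age);
            }
            this.age = age;
        }
    }

    @Test
    public void acceptsAndReturnsAValidAge() {
        Person person = new Person();
        person.setAge(30);
        assertEquals(30, person.getAge());
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsANegativeAge() {
        new Person().setAge(-1);    // purposefully trigger the exception path
    }
}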
Other than that I wouldn't bother, as Dan pointed out, "At some point, you need to trust that the compiler is going to generate the right program for you."
I like to have unit tests for them. If an accessor does any kind of work besides simply return a field then that code will be tested appropriately.
Even if a given accessor doesn't do anything other than return a field, it might be modified later to do something extra.
Also, it's an easy way to up the number of tests being run, which many managers like.