Is unit-testing of accessors a must?

For classes that have several setters and getters besides other methods, is it reasonable to save time on writing unit tests for the accessors, taking into account that they will be called while testing the rest of the interface anyway?

I would only unit test them if they do more than set or return a variable. At some point, you need to trust that the compiler is going to generate the right program for you.

Absolutely. The idea of unit tests is to ensure that changes do not affect behavior in unknown ways. You might save some time by not writing a test for getFoo(), but if you later change the type of Foo to something a little more complex, you could easily forget to test the accessor. If you are questioning whether you should write a test or not, you are better off writing it.
IMHO, if you are thinking about skipping adding tests for a method, you might want to ask yourself if the method is necessary or not. In interest of full disclosure, I am one of those people that only adds a setter or getter when it is proven necessary. You would be surprised how often you really don't need access to a specific member after construction or when you only want to give access to the result of some calculation that is really dependent on the member. But I digress.
A good mantra is to always add tests. If you don't think that you need one because the method is trivial, consider removing the method instead. I realize that the common advice is that it is okay to skip tests for "trivial" methods but you have to ask yourself if the method is even necessary. If you skip the test, you are saying that the method will always be trivial. Remember that unit tests also function as documentation of the intent and the contract offered. Hence tests of a trivial method state that the method is indeed meant to be trivial.
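To make the distinction concrete, here is a minimal sketch (the class and property names are invented for illustration): a trivial auto-property arguably needs no dedicated test, while an accessor that contains even a little logic can genuinely break and is worth one.
using System;
using NUnit.Framework;

public class Account
{
    // Trivial accessor: a test here would only restate what the compiler already guarantees.
    public string Owner { get; set; }

    private decimal balance;

    // Non-trivial accessor: it contains logic, so it can actually break.
    public decimal Balance
    {
        get { return balance; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value", "Balance cannot be negative.");
            balance = value;
        }
    }
}

[TestFixture]
public class AccountBalanceTests
{
    [Test]
    public void Balance_RejectsNegativeValues()
    {
        var account = new Account();
        Assert.Throws<ArgumentOutOfRangeException>(() => account.Balance = -1m);
    }

    [Test]
    public void Balance_RoundTripsValidValues()
    {
        var account = new Account { Balance = 10m };
        Assert.AreEqual(10m, account.Balance);
    }
}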

My criteria for testing is that every piece of code containing conditional logic (while, if, for, etc) be tested. If the accessors are simple getters/setters, I'd say testing them is wasting your time.

You don't have to write tests for properties that contain no logic.
The only reason to test simple properties is to boost test coverage - but that's just silly.

I think it's reasonable to save time and not write unit tests that you don't think will be particularly helpful.
While 100% test coverage is an admirable ideal, at some point you run into diminishing returns where the time you spent writing the test isn't worth the benefit you get out of having it.
You can always go back and add more unit tests later if you find situations where you decide they would be useful.

Our company has both kinds of people and opinions. I tend not to test them specifically, as they are usually:
- automatically generated
- tested in the context of another test (e.g. there's some other code making use of these accessors)
- not containing any code that might break
There are exceptions though:
- when they are not simply generated 'getters' and 'setters'
- when they are part of an important API that's just provided for other users and not really tested in the context you're currently in
Both these cases might cause me to test them. The first one more than the second.

No friggin' way!
Waste of time!
Even Bob Martin, the grandfather of Agile, says no (see SO podcast 41).

If your IDE generates and manages member accessors for you (you won't be doing anything special), then testing them really isn't important; types will match up, naming will follow a template, etc.

I think most people will say testing them is a waste of your time. In the 99% case that is true. If there's a bug in an accessor and the rest of your unit tests don't catch it indirectly then I'd start questioning why that property is there at all.
On the other hand, testing an accessor takes less typing than asking this question :)
Personally I test them. But this is a gray area for me and I don't press other people in my group to test them as long as they have sufficient coverage around the functionality of the class.

Usually when I consider writing unit tests I ask myself the following:
Is the getter/setter accessing anything on the DAL (Data Access Layer)?
If so, I would include a unit test, just in case: if at some point in the future you decide to implement lazy loading, or something more advanced than a simple get/set, you'll need to make sure it is working properly (a sketch follows this answer).
Is it foreseeable that the getter/setter will throw an exception?
The best practice for getters is to not allow them to throw exceptions at all. Setters are another matter. Either way, if you decide that a property might throw an exception, then write a unit test for that property, both for a successful access and for purposefully generating the exception.
Other than that I wouldn't bother, as Dan pointed out, "At some point, you need to trust that the compiler is going to generate the right program for you."
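As a rough sketch of the lazy-loading case described above (the Customer/IOrderRepository names are invented; a real DAL interface would differ), a property that looks trivial to callers but quietly hits the data layer deserves a test of its own:
using NUnit.Framework;

// Hypothetical collaborator standing in for the DAL.
public interface IOrderRepository
{
    OrderHistory LoadHistory(int customerId);
}

public class OrderHistory { }

public class Customer
{
    private readonly IOrderRepository repository;
    private OrderHistory history;

    public Customer(int id, IOrderRepository repository)
    {
        Id = id;
        this.repository = repository;
    }

    public int Id { get; private set; }

    // Looks like a simple getter to callers, but lazily loads from the DAL.
    public OrderHistory History
    {
        get { return history ?? (history = repository.LoadHistory(Id)); }
    }
}

[TestFixture]
public class CustomerHistoryTests
{
    private sealed class CountingRepository : IOrderRepository
    {
        public int Calls;
        public OrderHistory LoadHistory(int customerId)
        {
            Calls++;
            return new OrderHistory();
        }
    }

    [Test]
    public void History_IsLoadedOnce_EvenWhenReadTwice()
    {
        var repo = new CountingRepository();
        var customer = new Customer(42, repo);

        var first = customer.History;
        var second = customer.History;

        Assert.AreSame(first, second);   // cached after the first read
        Assert.AreEqual(1, repo.Calls);  // the DAL was hit exactly once
    }
}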

I like to have unit tests for them. If an accessor does any kind of work besides simply return a field then that code will be tested appropriately.
Even if a given accessor doesn't do anything other than return a field, it might be modified later to do something extra.
Also, it's an easy way to up the number of tests being run, which many managers like.

Related

Is it good practice to unit test properties?

I'm still getting to grips with the whole TDD concept. Should I be writing tests for properties? Should one only write tests on properties containing sufficient logic? Any thoughts or examples on this would be great.
Writing a test for a public getter of a private field will not give you much if the getter does nothing except return your private field. But if it does contain some logic (or just something that can fail, like converting your private Int32 field to Byte), testing such a property starts to make sense.
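For instance, a hedged sketch of that Int32-to-Byte case (the Sensor class is invented): the narrowing conversion is exactly the kind of thing that can fail, so it is worth a test.
using System;
using NUnit.Framework;

public class Sensor
{
    private int rawLevel;  // Int32 backing field

    public int RawLevel
    {
        get { return rawLevel; }
        set { rawLevel = value; }
    }

    // This getter can fail: values outside 0..255 cannot be converted.
    public byte Level
    {
        get { return checked((byte)rawLevel); }
    }
}

[TestFixture]
public class SensorLevelTests
{
    [Test]
    public void Level_ThrowsWhenRawValueDoesNotFitInAByte()
    {
        var sensor = new Sensor { RawLevel = 300 };
        Assert.Throws<OverflowException>(() => { byte ignored = sensor.Level; });
    }
}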
Test things that have a reasonable chance of failure - you gain no extra confidence by testing properties with no logic beyond get/set.
Simple rule of thumb I use when doing TDD: always write tests that fail.
If a test fails at first it's a good test for TDD. It means something is not yet implemented, or not implemented as it should be. Then you can change code to make it pass. A test that succeeds at first is a bad test. You don't even know if it succeeded because you made some mistake writing the test, or because what you are testing is already working.
If you are able to produce a test failure using properties, then write tests for those properties. Typically you should begin by writing a test for a setter or a getter before implementing it. A not-yet-implemented setter or getter can look trivial but makes the test fail. And why would you write any line of code, even a setter or getter, if it's not driven by a test failure?
Other kinds of tests, like tests as documentation showing how to use an API, are also very useful and good agile practice, but that is not TDD. TDD is about trying actively to break the code until you can't any more. Then you run functional tests, and if you pushed the unit tests hard enough, and if there is not some integration or system problem getting in the way, all should be fine.
I usually write junit tests for accessors. It doesn't add much when they're written, except to keep coverage statistics pretty. But if someone adds "sufficient logic" later to the production code, the tests will already be in place to catch any mistakes.
Also it is the work of moments to write a test to check the value returned by a getter.
Instead of thinking of them as tests, think of each test as an example of how someone could use your code.
Instead of just testing properties, think about the behaviour which changes when the properties have different values, and give an example of the behaviour of the class in each meaningful context.
If it's really just a data property you can test by inspection, or with automated acceptance tests, or manually, maybe with a tester's help. Otherwise, don't worry about testing each method, or each property - just show how you can use the code and how you expect it to behave.
Think of TDD and unit tests as a way to "see ahead": you write the test in the way you feel the public signature of your classes would best be.

How simple is 'too simple to break'? - explained

In JUnit FAQ you can read that you shouldn't test methods that are too simple to break. While all examples seem logical (getters and setters, delegation etc.), I'm not sure I am able to grasp the "can't break on its own" concept in full. When would you say that the method "can't break on its own"? Anyone care to elaborate?
I think "can't break on its own" means that the method only uses elements of its own class, and does not depend upon the behavior of any other objects/classes, or that it delegates all of its functionality to some other method or class (which presumably has its own tests).
The basic idea is that if you can see everything the method does, without needing to refer to other methods or classes, and you are pretty sure it is correct, then a test is probably not necessary.
There is not necessarily a clear line here. "Too simple to break" is in the eye of the beholder.
Try thinking of it this way. You're not really testing methods. You're describing some behaviour and giving some examples of how to use it, so that other people (including your later self) can come and change that behaviour safely later. The examples happen to be executable.
If you think that people can change your code safely, you don't need to worry.
No matter how simple a method is, it can still be wrong. For example you might have two similarly named variables and access the wrong one. However, these errors will likely be quickly found and once these methods are written correctly, they are going to stay correct and so it is not worthwhile permanently keeping around a test for this. Rather than "too simple to break", I would recommend considering whether it is too simple to be worth keeping a permanent test.
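A minimal sketch of that kind of slip (names invented): a single test catches it immediately, and once the getter is fixed the test mostly just restates the assignment, which is exactly the trade-off described above.
using NUnit.Framework;

public class Rectangle
{
    private readonly int width;
    private readonly int height;

    public Rectangle(int width, int height)
    {
        this.width = width;
        this.height = height;
    }

    public int Width  { get { return height; } }  // Bug: returns the wrong, similarly named field.
    public int Height { get { return height; } }
}

[TestFixture]
public class RectangleTests
{
    [Test]
    public void Width_ReturnsTheValuePassedToTheConstructor()
    {
        Assert.AreEqual(3, new Rectangle(3, 4).Width);  // Fails until the getter is fixed.
    }
}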
Put it this way: you're building a wooden table.
You'll test the things that may fail. For instance, putting a jar on the table, sitting on the table, or pushing the table from one side of the room to another. You're testing the table in ways you know it is somehow vulnerable, or at least in ways you know you'll use it.
You don't test the nails, though, or one of its legs, because they are "too simple to break on their own".
The same goes for unit testing: you don't test getters/setters, because the only way they may fail is if the runtime environment fails. You don't test methods that merely forward the message to other methods, because they are too simple to break on their own; you had better test the referenced method instead.
I hope this helps.
If you are using TDD, that advice is wrong. In the TDD case your function exists because the test failed. It doesn't matter if the function is simple or not.
But if you are adding tests afterwards to some existing code, I can sort of understand the argument that you shouldn't need to test code that cannot break. But I still think that is just an excuse for not having to write tests. Also ask yourself: if that piece of code is not worthy of testing, then maybe that code is not needed at all?
I like the risk-based approach of GAMP 5, which basically means (in this context) first assessing the various possible risks of a piece of software and only defining tests for the higher-risk parts.
Although this applies to GxP environments, it can be adapted along these lines: how likely is a certain class to have erroneous methods, and how big would the impact of an error be? E.g., if a method decides whether to give a user access to a resource, you must of course test it extensively enough.
That means that in determining where to draw the line for "too simple to break", it can help to take into consideration the possible consequences of a potential flaw.
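As a hedged sketch of what "test that extensively enough" might look like for the access-control example (AccessPolicy and its rules are invented for illustration), the high-risk method gets several cases while trivial code elsewhere might get none:
using NUnit.Framework;

// Hypothetical high-risk method: decides whether a user may reach a resource.
public static class AccessPolicy
{
    public static bool CanAccess(string role, bool resourceIsRestricted)
    {
        if (!resourceIsRestricted) return true;
        return role == "Admin";
    }
}

[TestFixture]
public class AccessPolicyTests
{
    // The risky combinations each get an explicit case.
    [TestCase("Admin", true,  true)]
    [TestCase("User",  true,  false)]
    [TestCase("User",  false, true)]
    [TestCase(null,    true,  false)]
    public void CanAccess_CoversTheRiskyCombinations(string role, bool restricted, bool expected)
    {
        Assert.AreEqual(expected, AccessPolicy.CanAccess(role, restricted));
    }
}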

TDD - top level function has too many mocks. Should I even bother testing it?

I have a .NET application with a web front-end, WCF Windows service back-end. The application is fairly simple - it takes some user input, sending it to the service. The service does this - takes the input (Excel spreadsheet), extracts the data items, checks SQL DB to make sure the items are not already existing - if they do not exist, we make a real-time request to a third party data vendor and retrieve the results, inserting them into the database. It does some logging along the way.
I have a Job class with a single public ctor and public Run() method. The ctor takes all the params, and the Run() method does all of the above logic. Each logical piece of functionality is split into a separate class - IParser does file parsing, IConnection does the interaction with the data vendor, IDataAccess does the data access, etc. The Job class has private instances of these interfaces, and uses DI to construct the actual implementations by default, but allows the class user to inject any interface.
In the real code, I use the default ctor. In my unit tests for the Run() method, I use all mock objects, created via NMock 2.0. This Run() method is essentially the 'top level' function of this application.
Now here's my issue / question: the unit tests for this Run() method are crazy. I have three mock objects I'm sending into the ctor, and each mock object sets expectations on itself. At the end I verify. I have a few different flows that the Run method can take, each flow having its own test - it could find everything is already in the database and not make a request to the vendor... or an exception could be thrown and the job status could be set to 'failed'... or we could have the case where we didn't have the data and needed to make the vendor request (so all those function calls would need to be made).
Now - before you yell at me and say 'your Run() method is too complicated!' - this Run method is a mere 50 lines of code (it does make calls to some private functions, but the entire class is only 160 lines), since all the 'real' logic is being done in the interfaces declared on this class. However, the biggest unit test on this function is 80 lines of code, with 13 calls to Expect.BLAH().
This makes re-factoring a huge pain. If I want to change this Run() method around, I have to go edit my three unit tests and add/remove/update Expect() calls. When I need to refactor, I end up spending more time creating my mock calls than I did actually writing the new code. And doing real TDD on this function makes it even more difficult if not impossible. It's making me think that it's not even worth unit testing this top level function at all, since really this class isn't doing much logic, it's just passing around data to its composite objects (which are all fully unit tested and don't require mocking).
So - should I even bother testing this high level function? And what am I gaining by doing this? Or am I completely misusing mock/stub objects here? Perhaps I should scrap the unit tests on this class, and instead just make an automated integration test, which uses the real implementations of the objects and Asserts() against SQL Queries to make sure the right end-state data exists? What am I missing here?
EDIT: Here's the code - the first function is the actual Run() method - then my five tests which test all five possible code paths. I changed it some for NDA reasons but the general concept is still there. Anything you see wrong with how I'm testing this function, any suggestions on what to change to make it better? Thanks.
I guess my advice echoes most of what is posted here.
It sounds as if your Run method needs to be broken down more. If its design is forcing you into tests that are more complicated than it is, something is wrong. Remember this is TDD we're talking about, so your tests should dictate the design of your routine. If that means testing private functions, so be it. No technological philosophy or methodology should be so rigid that you can't do what feels right.
Additionally, I agree with some of the other posters that your tests should be broken down into smaller segments. Ask yourself this: if you were going to write this app for the first time and your Run function didn't yet exist, what would your tests look like? That answer is probably not what you have currently (otherwise you wouldn't be asking the question). :)
The one benefit you do have is that there isn't a lot of code in the class, so refactoring it shouldn't be very painful.
EDIT
Just saw you posted the code and had some thoughts (no particular order).
Way too much code (IMO) inside your SyncLock block. The general rule is to keep the code inside a SyncLock to a minimum. Does it ALL have to be locked?
Start breaking code out into functions that can be tested independently. Example: the For loop that removes IDs from the List(String) if they exist in the DB. Some might argue that the m_dao.BeginJob call should be in some sort of GetID function that can be tested.
Can any of the m_dao procedures be turned into functions that can be tested on their own? I would assume that the m_dao class has its own tests somewhere, but looking at the code it appears that might not be the case. They should, along with the functionality in the m_Parser class. That will relieve some of the burden of the Run tests.
If this were my code, my goal would be to get it to a place where all the individual procedure calls inside Run are tested on their own and the Run tests just test the final outcome. Given input A, B, C: expect outcome X. Given input E, F, G: expect Y. The details of how Run gets to X or Y are already covered by the other procedures' tests.
These were just my initial thoughts. I'm sure there are a bunch of different approaches one could take.
Two thoughts: first you should have an integration test anyway to make sure everything hangs together. Second, it sounds to me like you're missing intermediate objects. In my world, 50 lines is a long method. It's hard to say anything more precise without seeing the code.
The first thing I would try would be refactoring your unit tests to share the setup code between tests by extracting a method that sets up the mocks and expectations. Parameterize the method so your expectations are configurable. You may need one or more of these setup methods depending on how much alike the setup is from test to test.
"So - should I even bother testing this high level function?"
Yes. If there are different code-paths, you should be.
"And what am I gaining by doing this? Or am I completely misusing mock/stub objects here?"
As J.B. pointed out (nice seeing you at AgileIndia2010!), Fowler's article is a recommended read. As a gross simplification: use stubs when you don't care about the values returned by the collaborators. If the return values from collaborator.call_method() change the behavior (or you need non-trivial checks on args, or computation for return values), you need mocks.
Suggested refactorings:
Try moving the creation and injection of mocks into a common Setup method. Most unit testing frameworks support this; it will be called before each test.
Your LogMessage calls are beacons - calling out once again for intention-revealing methods, e.g. SubmitBARRequest(). This will shorten your production code.
Try and move each Expect.Blah1(..) into intention-revealing methods (a sketch follows this example).
This will shorten your test code and make it immensely readable and easier to modify. e.g.
Replace all instances of
Expect.Once.On(mockDao) _
    .Method("BeginJob") _
    .With(New Object() {submittedBy, clientID, runDate, "Sent For Baring"}) _
    .Will([Return].Value(0))
with
ExpectBeginJobOnDAO_AndReturnZero() ' you can name it better
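A sketch of what those intention-revealing helpers could look like, reusing the identifiers from the snippet above (mockDao, submittedBy, clientID, runDate) and a stand-in for the question's IDataAccess interface. It is shown in C# to match the other examples in this collection (the question's code is VB.NET), and it is only an outline of one possible fixture, not the question's actual test class:
using System;
using NMock2;
using NUnit.Framework;

// Stand-in for the question's interface; the real one has more members.
public interface IDataAccess
{
    int BeginJob(string submittedBy, int clientID, DateTime runDate, string status);
}

[TestFixture]
public class JobRunTests
{
    private Mockery mockery;
    private IDataAccess mockDao;
    private string submittedBy = "someUser";
    private int clientID = 1;
    private DateTime runDate = DateTime.Today;

    [SetUp]
    public void SetUp()
    {
        mockery = new Mockery();
        mockDao = mockery.NewMock<IDataAccess>();
    }

    // One intention-revealing helper per expectation keeps each test readable.
    private void ExpectBeginJobOnDAO_AndReturnZero()
    {
        Expect.Once.On(mockDao)
            .Method("BeginJob")
            .With(submittedBy, clientID, runDate, "Sent For Baring")
            .Will(Return.Value(0));
    }

    // A test for one flow then reads as a list of intentions (helper names here are hypothetical):
    //
    //   ExpectBeginJobOnDAO_AndReturnZero();
    //   ExpectParserToReturnUncachedItems();
    //   ExpectVendorRequestForThoseItems();
    //   CreateJobWithMocks().Run();
    //   mockery.VerifyAllExpectationsHaveBeenMet();
}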
On whether to test such a function: you said in a comment,
"the tests read just like the actual function, and since I'm using mocks, it's only asserting the functions are called and sent params (I can check this by eyeballing the 50 line function)"
IMHO eyeballing the function isn't enough - haven't you heard "I can't believe I missed that!"? You have a fair number of scenarios that could go wrong in that Run method; covering that logic is a good idea.
On tests being brittle: try having some shared methods that you can use in the test class for the common scenarios. If you are concerned about a later change breaking all the tests, put the pieces that concern you in specific methods that can be changed if needed.
On tests being too long / hard to know what's in there: don't test a single scenario with every single assertion that relates to it. Break it up: test things like "it should log x messages when y happens" (one test), "it should save to the db when y happens" (another, separate test), "it should send a request to a third party when z happens" (yet another test), etc.
On doing integration/system tests instead of these unit tests: you can see from your current situation that there are plenty of scenarios and little variations involved in that part of your system - and that's with the shield of replacing yet more logic with those mocks and the ease of simulating different conditions. Doing the same with the whole thing will add a whole new level of complexity to your scenario, something that is surely unmanageable if you want to cover a wide set of scenarios.
IMHO you should minimize the combinations that you leave for your system tests; exercising some main scenarios should already tell you that a lot of the system is working correctly - it should be largely about everything being hooked up correctly.
That said, I do recommend adding focused integration tests for all the integration code you have that might not currently be covered by your tests, since by definition unit tests don't get there. These exercise the integration code specifically, with all the variations you expect from it - the corresponding tests are much simpler than trying to reach those behaviors in the context of the whole system, and they tell you very quickly if any assumption in those pieces is causing trouble.
If you think unit tests are too hard here, do this instead: add post-conditions to the Run method. Post-conditions are assertions you make about the state of the code. For example, at the end of that method, you may want some variable to hold a particular value, or one value out of some set of possible choices.
Afterwards, you can derive the pre-conditions for the method. This is basically the data type of each parameter and the limits and constraints on each of those parameters (and on any other variable initialized at the beginning of the method).
In this way, you can be sure both the input and output are what is desired.
That probably still won't be enough so you will have to look at the code of the method line by line and look for large sections that you want to make assertions about. If you have an If statement, you should check for some conditions before and after it.
You won't need any mock objects if you know how to check if the arguments to the object are valid and you know what range of outputs are desired.
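A minimal sketch of that idea, assuming a drastically simplified Run (the status values and the ProcessedIds property are invented): pre-conditions guard the inputs, post-conditions assert what the method promises about its state when it returns.
using System;
using System.Collections.Generic;
using System.Diagnostics;

public class Job
{
    public string Status { get; private set; }
    public List<string> ProcessedIds { get; private set; }

    public void Run(IEnumerable<string> requestedIds)
    {
        // Pre-condition: constraints on the inputs.
        if (requestedIds == null) throw new ArgumentNullException("requestedIds");

        ProcessedIds = new List<string>(requestedIds);
        Status = ProcessedIds.Count == 0 ? "NothingToDo" : "Completed";

        // Post-conditions: assertions about the state the method promised to reach.
        Debug.Assert(ProcessedIds != null, "Run must never leave ProcessedIds null.");
        Debug.Assert(Status == "Completed" || Status == "NothingToDo" || Status == "Failed",
                     "Run must leave the job in a known terminal status.");
    }
}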
Your tests are too complicated.
You should test aspects of your class rather than writing a unit test for each member of your class. A single unit test should not cover the entire functionality of a member.
I'm going to guess that each test for Run() set expectations on every method they call on the mocks, even if that test doesn't focus on checking every such method invocation. I strongly recommend you Google "mocks aren't stubs" and read Fowler's article.
Also, 50 lines of code is pretty complex. How many codepaths through the method? 20+? You might benefit from a higher level of abstraction. I'd need to see code to judge more certainly.

Does YAGNI also apply when writing tests?

When I write code I only write the functions I need as I need them.
Does this approach also apply to writing tests?
Should I write a test in advance for every use-case I can think of just to play it safe or should I only write tests for a use-case as I come upon it?
I think that when you write a method you should test both expected and potential error paths. This doesn't mean that you should expand your design to encompass every potential use -- leave that for when it's needed, but you should make sure that your tests have defined the expected behavior in the face of invalid parameters or other conditions.
YAGNI, as I understand it, means that you shouldn't develop features that are not yet needed. In that sense, you shouldn't write a test that drives you to develop code that's not needed. I suspect, though, that's not what you are asking about.
In this context I'd be more concerned with whether you should write tests that cover unexpected uses -- for example, errors due to passing null or out-of-range parameters -- or repeating tests that only differ with respect to the data, not the functionality. In the former case, as I indicated above, I would say yes. Your tests will document the expected behavior of your method in the face of errors. This is important information to people who use your method.
In the latter case, I'm less able to give you a definitive answer. You certainly want your tests to remain DRY -- don't write a test that simply repeats another test even if it has different data. On the other hand, you may not discover potential design issues unless you exercise the edge cases of your data. A simple example is a method that computes a sum of two integers: what happens if you pass it maxint as both parameters? If you only have one test, then you may miss this behavior. Obviously, this is related to the previous point. Only you can be sure when a test is really needed or not.
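The sum-of-two-integers example as an actual edge-case test (a sketch; the Adder name and the choice of checked arithmetic are assumptions for illustration, not part of the answer above):
using System;
using NUnit.Framework;

public static class Adder
{
    // checked() turns silent wrap-around into an OverflowException.
    public static int Sum(int a, int b)
    {
        return checked(a + b);
    }
}

[TestFixture]
public class AdderTests
{
    [Test]
    public void Sum_AddsTwoOrdinaryIntegers()
    {
        Assert.AreEqual(5, Adder.Sum(2, 3));
    }

    [Test]
    public void Sum_ThrowsInsteadOfSilentlyWrappingAtTheEdge()
    {
        Assert.Throws<OverflowException>(() => Adder.Sum(int.MaxValue, int.MaxValue));
    }
}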
Yes YAGNI absolutely applies to writing tests.
As an example, I, for one, do not write tests to check any Properties. I assume that properties work a certain way, and until I come to one that does something different from the norm, I won't have tests for them.
You should always consider the validity of writing any test. If there is no clear benefit to you in writing the test, then I would advise that you don't. However, this is clearly very subjective, since what you might think is not worth it someone else could think is very worth the effort.
Also, would I write tests to validate input? Absolutely. However, I would do it to a point. Say you have a function with 3 int parameters that returns a double. How many tests are you going to write around that function? I would use YAGNI here to determine which tests are going to give you a good ROI, and which are useless.
Write the test as you need it. Tests are code. Writing a bunch of (initially failing) tests up front breaks the red/fix/green cycle of TDD, and makes it harder to identify valid failures vs. unwritten code.
You should write the tests for the use cases you are going to implement during this phase of development.
This gives the following benefits:
Your tests help define the functionality of this phase.
You know when you've completed this phase because all of your tests pass.
You should write tests that cover all your code, ideally. Otherwise, the rest of your tests lose value, and you will in the end debug that piece of code repeatedly.
So, no. YAGNI does not include tests :)
There is of course no point in writing tests for use cases you're not sure will get implemented at all - that much should be obvious to anyone.
For use cases you know will get implemented, test cases are subject to diminishing returns, i.e. trying to cover each and every possible obscure corner case is not a useful goal when you can cover all important and critical paths with half the work - assuming, of course, that the cost of overlooking a rarely occurring error is endurable; I would certainly not settle for anything less than 100% code and branch coverage when writing avionics software.
You'll probably get some variance here, but generally, the goal of writing tests (to me) is to ensure that all your code is functioning as it should, without side effects, in a predictable fashion and without defects. In my mind, then, the approach you discuss of only writing tests for use cases as you come upon them does you no real good, and may in fact cause harm.
What if the particular use case for the unit under test that you ignore causes a serious defect in the final software? Has the time spent developing tests bought you anything in this scenario beyond a false sense of security?
(For the record, this is one of the issues I have with using code coverage to "measure" test quality -- it's a measurement that, if low, may give an indication that you're not testing enough, but if high, should not be used to assume that you are rock-solid. Get the common cases tested, the edge cases tested, then consider all the ifs, ands and buts of the unit and test them, too.)
Mild Update
I should note that I'm coming from possibly a different perspective than many here. I often find that I'm writing library-style code, that is, code which will be reused in multiple projects, for multiple different clients. As a result, it is generally impossible for me to say with any certainty that certain use cases simply won't happen. The best I can do is either document that they're not expected (and hence may require updating the tests afterward), or -- and this is my preference :) -- just writing the tests. I often find option #2 is far more livable on a day-to-day basis, simply because I have much more confidence when I'm reusing component X in new application Y. And confidence, in my mind, is what automated testing is all about.
You should certainly hold off writing test cases for functionality you're not going to implement yet. Tests should only be written for existing functionality or functionality you're about to put in.
However, use cases are not the same as functionality. You only need to test the valid use cases that you've identified, but there's going to be a lot of other things that might happen, and you want to make sure those inputs get a reasonable response (which could well be an error message).
Obviously, you aren't going to get all the possible use cases; if you could, there'd be no need to worry about computer security. You should get at least the more plausible ones, and as problems come up you should add them to the use cases to test.
I think the answer here is, as it is in so many places, it depends. If the contract that a function presents states that it does X, and I see that it's got associated unit tests, etc., I'm inclined to think it's a well-tested unit and use it as such, even if I don't use it that exact way elsewhere. If that particular usage pattern is untested, then I might get confusing or hard-to-trace errors. For this reason, I think a test should cover all (or most) of the defined, documented behavior of a unit.
If you choose to test more incrementally, I might add to the doc comments that the function is "only tested for [certain kinds of input], results for other inputs are undefined".
I frequently find myself writing tests, TDD, for cases that I don't expect the normal program flow to invoke. The "fake it 'til you make it" approach has me starting, generally, with a null input - just enough to have an idea in mind of what the function call should look like, what types its parameters will have and what type it will return. To be clear, I won't just send null to the function in my test; I'll initialize a typed variable to hold the null value; that way when Eclipse's Quick Fix creates the function for me, it already has the right type. But it's not uncommon that I won't expect the program normally to send a null to the function. So, arguably, I'm writing a test that I AGN. But if I start with values, sometimes it's too big a chunk. I'm both designing the API and pushing its real implementation from the beginning. So, by starting slow and faking it 'til I make it, sometimes I write tests for cases I don't expect to see in production code.
If you're working in a TDD or XP style, you won't be writing anything "in advance" as you say, you'll be working on a very precise bit of functionality at any given moment, so you'll be writing all the necessary tests in order make sure that bit of functionality works as you intend it to.
Test code is similar to "code" itself: you won't write code in advance for every use case your app has, so why would you write test code in advance?

How deep are your unit tests?

The thing I've found about TDD is that it takes time to get your tests set up, and being naturally lazy I always want to write as little code as possible. The first thing I seem to do is test that my constructor has set all the properties, but is this overkill?
My question is: at what level of granularity do you write your unit tests?
..and is there a case of testing too much?
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.
Write unit tests for things you expect to break, and for edge cases. After that, test cases should be added as bug reports come in - before writing the fix for the bug. The developer can then be confident that:
The bug is fixed;
The bug won't reappear.
Per the comment attached - I guess this approach to writing unit tests could cause problems, if lots of bugs are, over time, discovered in a given class. This is probably where discretion is helpful - adding unit tests only for bugs that are likely to re-occur, or where their re-occurrence would cause serious problems. I've found that a measure of integration testing in unit tests can be helpful in these scenarios - testing code higher up codepaths can cover the codepaths lower down.
Everything should be made as simple as possible, but not simpler. - A. Einstein
One of the most misunderstood things about TDD is the first word in it: Test. That's why BDD came along - because people didn't really understand that the first D was the important one, namely Driven. We all tend to think a little bit too much about the testing, and a little bit too little about the driving of design. I guess this is a vague answer to your question, but you should probably consider how to drive your code, instead of what you actually are testing; that is something a coverage tool can help you with. Design is a rather bigger and more problematic issue.
To those who propose testing "everything": realise that "fully testing" a method like int square(int x) requires about 4 billion test cases in common languages and typical environments.
In fact, it's even worse than that: a method void setX(int newX) is also obliged not to alter the values of any other members besides x -- are you testing that obj.y, obj.z, etc. all remain unchanged after calling obj.setX(42);?
It's only practical to test a subset of "everything." Once you accept this, it becomes more palatable to consider not testing incredibly basic behaviour. Every programmer has a probability distribution of bug locations; the smart approach is to focus your energy on testing regions where you estimate the bug probability to be high.
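For instance, a sketch of the setX case above (the Point3D class is invented): the test pins down the "leaves everything else alone" part of the contract, while still sampling only a handful of the billions of possible inputs, which is exactly the point being made.
using NUnit.Framework;

public class Point3D
{
    public int X { get; private set; }
    public int Y { get; private set; }
    public int Z { get; private set; }

    public Point3D(int x, int y, int z) { X = x; Y = y; Z = z; }

    public void SetX(int newX) { X = newX; }
}

[TestFixture]
public class Point3DTests
{
    [Test]
    public void SetX_ChangesXAndLeavesTheOtherMembersAlone()
    {
        var obj = new Point3D(1, 2, 3);
        obj.SetX(42);

        Assert.AreEqual(42, obj.X);
        Assert.AreEqual(2, obj.Y);  // the "unchanged" part of the contract
        Assert.AreEqual(3, obj.Z);
    }
}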
The classic answer is "test anything that could possibly break". I interpret that as meaning that testing setters and getters that don't do anything except set or get is probably too much testing, no need to take the time. Unless your IDE writes those for you, then you might as well.
If your constructor not setting properties could lead to errors later, then testing that they are set is not overkill.
I write tests to cover the assumptions of the classes I will write. The tests enforce the requirements. Essentially, if x can never be 3, for example, I'm going to ensure there is a test that covers that requirement.
Invariably, if I don't write a test to cover a condition, it'll crop up later during "human" testing. I'll certainly write one then, but I'd rather catch them early. I think the point is that testing is tedious (perhaps) but necessary. I write enough tests to be complete but no more than that.
Part of the problem with skipping simple tests now is that in the future refactoring could make that simple property very complicated, with lots of logic. I think the best idea is to use tests to verify the requirements of the module: if when you pass X you should get Y back, that's what you want to test. Then when you change the code later on, you can verify that X still gives you Y, and you can add a test for "A gives you B" when that requirement is added later on.
I've found that the time I spend during initial development writing tests pays off in the first or second bug fix. The ability to pick up code you haven't looked at in 3 months and be reasonably sure your fix covers all the cases, and "probably" doesn't break anything is hugely valuable. You also will find that unit tests will help triage bugs well beyond the stack trace, etc. Seeing how individual pieces of the app work and fail gives huge insight into why they work or fail as a whole.
In most instances, I'd say, if there is logic there, test it. This includes constructors and properties, especially when more than one thing gets set in the property.
With respect to too much testing, it's debatable. Some would say that everything should be tested for robustness, others say that for efficient testing, only things that might break (i.e. logic) should be tested.
I'd lean more toward the second camp, just from personal experience, but if somebody did decide to test everything, I wouldn't say it was too much... a little overkill maybe for me, but not too much for them.
So, No - I would say there isn't such a thing as "too much" testing in the general sense, only for individuals.
Test Driven Development means that you stop coding when all your tests pass.
If you have no test for a property, then why should you implement it? If you do not test/define the expected behaviour in case of an "illegal" assignment, what should the property do?
Therefore I'm totally for testing every behaviour a class should exhibit. Including "primitive" properties.
To make this kind of testing easier, I created a simple NUnit TestFixture that provides extension points for setting/getting the value, takes lists of valid and invalid values, and has a single test to check whether the property works right. Testing a single property could look like this:
[TestFixture]
public class Test_MyObject_SomeProperty : PropertyTest<int>
{
    private MyObject obj = null;
    public override void SetUp() { obj = new MyObject(); }
    public override void TearDown() { obj = null; }
    public override int Get() { return obj.SomeProperty; }
    public override void Set(int value) { obj.SomeProperty = value; }
    public override IEnumerable<int> SomeValidValues() { return new List<int> { 1, 3, 5, 7 }; }
    public override IEnumerable<int> SomeInvalidValues() { return new List<int> { 2, 4, 6 }; }
}
Using lambdas and attributes this might even be written more compactly. I gather MBUnit has even some native support for things like that. The point though is that the above code captures the intent of the property.
P.S.: Probably the PropertyTest should also have a way of checking that other properties on the object didn't change. Hmm .. back to the drawing board.
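For reference, one possible shape of the PropertyTest<T> base class referred to above - a hedged reconstruction, not the author's original, and it splits the valid/invalid checks into two tests rather than the single test described:
using System;
using System.Collections.Generic;
using NUnit.Framework;

public abstract class PropertyTest<T>
{
    [SetUp]    public abstract void SetUp();
    [TearDown] public abstract void TearDown();

    public abstract T Get();
    public abstract void Set(T value);
    public abstract IEnumerable<T> SomeValidValues();
    public abstract IEnumerable<T> SomeInvalidValues();

    [Test]
    public void ValidValues_RoundTripThroughTheProperty()
    {
        foreach (T value in SomeValidValues())
        {
            Set(value);
            Assert.AreEqual(value, Get());
        }
    }

    [Test]
    public void InvalidValues_AreRejected()
    {
        foreach (T value in SomeInvalidValues())
        {
            try
            {
                Set(value);
                Assert.Fail("Expected the setter to reject invalid value: " + value);
            }
            catch (AssertionException)
            {
                throw;  // re-throw our own failure
            }
            catch (Exception)
            {
                // expected: the property rejected the invalid value
            }
        }
    }
}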
I write unit tests to reach the maximum feasible coverage. If I cannot reach some code, I refactor until the coverage is as full as possible.
After finishing this blanket test writing, I usually write one test case reproducing each bug that turns up.
I'm used to separating code testing from integration testing. During integration testing (these are also unit tests, but on groups of components, so not exactly what unit tests are for), I test that the requirements are implemented correctly.
So the more I drive my programming by writing tests, the less I worry about the level of granularity of the testing. Looking back, it seems I am doing the simplest thing possible to achieve my goal of validating behaviour. This means I am generating a layer of confidence that my code is doing what I ask it to do; however, this is not considered an absolute guarantee that my code is bug-free. I feel that the correct balance is to test standard behaviour and maybe an edge case or two, then move on to the next part of my design.
I accept that this will not cover all bugs and use other traditional testing methods to capture these.
Generally, I start small, with inputs and outputs that I know must work. Then, as I fix bugs, I add more tests to ensure the things I've fixed are tested. It's organic, and works well for me.
Can you test too much? Probably, but it's probably better to err on the side of caution in general, though it'll depend on how mission-critical your application is.
I think you must test everything in the "core" of your business logic. Getters and setters too, because they could accept a negative or null value that you might not want to accept. If you have time (that always depends on your boss), it's good to test the other business logic and all the controllers that call these objects (going slowly from unit testing to integration testing).
I don't unit test simple setter/getter methods that have no side effects. But I do unit test every other public method. I try to create tests for all the boundary conditions in my algorithms and check the coverage of my unit tests.
It's a lot of work but I think it's worth it. I would rather write code (even testing code) than step through code in a debugger. I find the code-build-deploy-debug cycle very time consuming, and the more exhaustive the unit tests I have integrated into my build, the less time I spend going through that cycle.
You didn't say which architecture you are coding to, but for Java I use Maven 2, JUnit, DbUnit, Cobertura, and EasyMock.
The more I read about it the more I think some unit tests are just like some patterns: A smell of insufficient languages.
When you need to test whether your trivial getter actually returns the right value, it is because you may mix up the getter name and the member variable name. Enter Ruby's 'attr_reader :name', and this can't happen any more. It's just not possible in Java.
If your getter ever gets nontrivial you can still add a test for it then.
Test the source code that you are worried about.
It is not useful to test portions of code that you are very, very confident in, as long as you don't make mistakes in them.
Test bugfixes, so that it is the first and last time you fix a bug.
Test to get confidence of obscure code portions, so that you create knowledge.
Test before heavy and medium refactoring, so that you don't break existing features.
This answer is more for figuring out how many unit tests to use for a given method you know you want to unit test due to its criticality/importance. Using McCabe's Basis Path Testing technique, you could do the following to quantitatively gain better code-coverage confidence than simple "statement coverage" or "branch coverage" (a small illustration follows these steps):
1. Determine the Cyclomatic Complexity value of the method that you want to unit test (Visual Studio 2010 Ultimate, for example, can calculate this for you with its static analysis tools; otherwise, you can calculate it by hand via the flowgraph method - http://users.csc.calpoly.edu/~jdalbey/206/Lectures/BasisPathTutorial/index.html)
2. List the basis set of independent paths that flow through your method - see the link above for a flowgraph example
3. Prepare unit tests for each independent basis path determined in step 2
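A small illustration of steps 1-3 (the ShippingCalculator method is invented; counting the short-circuited || as its own decision, as most tools do, its cyclomatic complexity is 3, so the basis set has three independent paths, each covered by one test):
using NUnit.Framework;

public static class ShippingCalculator
{
    // One if with two conditions gives a cyclomatic complexity of 3.
    public static decimal Fee(decimal orderTotal, bool isPremiumMember)
    {
        if (orderTotal >= 100m || isPremiumMember)
            return 0m;
        return 7.5m;
    }
}

[TestFixture]
public class ShippingCalculatorBasisPathTests
{
    [Test] // Path 1: first condition true
    public void LargeOrders_ShipForFree()
    {
        Assert.AreEqual(0m, ShippingCalculator.Fee(150m, false));
    }

    [Test] // Path 2: first condition false, second condition true
    public void PremiumMembers_ShipForFree()
    {
        Assert.AreEqual(0m, ShippingCalculator.Fee(20m, true));
    }

    [Test] // Path 3: both conditions false
    public void SmallNonPremiumOrders_PayTheFlatFee()
    {
        Assert.AreEqual(7.5m, ShippingCalculator.Fee(20m, false));
    }
}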