Suggestion for JUnit testing

All,
While writing a test method for method A (which has many internal conditions), should I focus on testing a single condition each time? I am finding it difficult to structure a test method for method A that covers every code path in method A.
Can anyone suggest how to go about writing the test methods?

Do not feel the need to have one-test-per-method. Keep your unit tests fine-grained, descriptive, and easy to understand. If that means multiple, similar tests all calling the same target method, then do it.
For that matter, try to avoid the habit of systematically writing a unit test for each method. Unit tests should be well thought out, not habitual. They should describe the behaviour of classes, rather than of individual methods.

You should write a test for every execution path. For example, if you have
if (cond1 || cond2) {....}
you should test for cond1 and cond2 separately. Separate test methods are fine and encouraged if that makes sense. It's OK to have
testMyMethodCond1(){...}
and
testMyMethodCond2(){...}
and whatever else you need.
Also, when you say your method has 'many internal conditions', maybe you want to refactor your code so that some of those conditions are handled in other, smaller methods that are easier to test.
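To make the cond1/cond2 suggestion above concrete, here is a minimal JUnit 4 sketch. The method under test is a made-up stand-in, embedded in the test class only so the example is self-contained:

import org.junit.Test;
import static org.junit.Assert.*;

public class MyMethodTest {

    // Tiny hypothetical stand-in for the method under test:
    // it takes the "handled" branch when either condition holds.
    static String myMethod(boolean cond1, boolean cond2) {
        if (cond1 || cond2) {
            return "handled";
        }
        return "skipped";
    }

    @Test
    public void myMethod_takesHandledBranch_whenCond1() {
        assertEquals("handled", myMethod(true, false));
    }

    @Test
    public void myMethod_takesHandledBranch_whenCond2() {
        assertEquals("handled", myMethod(false, true));
    }

    @Test
    public void myMethod_takesSkippedBranch_whenNeitherConditionHolds() {
        assertEquals("skipped", myMethod(false, false));
    }
}

Each test exercises exactly one path, so a failure points straight at the broken branch.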

My preference is to have one failure point per JUnit test method. This means that I will have many JUnit test methods per target class method that I'm testing.
In your example, I may have testA1(), testA2(), testA3() all testing the same method (A). Each of these would test a different success or failure condition of method A.
If there are 8 paths through method A, then you need at least 8 test methods calling it and maybe some for error handling conditions.

First of all, you should consider breaking your method into several methods if it has more than one responsibility. Secondly, I would advise you to write multiple tests for each method. Each test should cover a specific path through the method under test, and each test should also (of course) check for the expected outcome given its test data.

Tests should be simpler, preferably much simpler, than the thing tested. Otherwise the error is more likely to be in your test. So it's much better to have a lot of small simple test methods that execute small parts of your method than one big clunky one. (You can use a code coverage tool like Cobertura to verify that you're covering all paths in your method.)


Is it good practice to unit test properties?

I'm still getting to grips with the whole TDD concept. Should I be writing tests for properties? Should one only write tests on properties containing sufficient logic? Any thoughts or examples on this would be great.
Writing a test for a public getter of a private field will not give you much if the getter does nothing except return the private field. But if it does contain some logic (or just something that can fail, like converting your private Int32 field to a Byte), testing such a property starts to make sense.
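As a rough illustration of a "getter with logic" that is worth a test, here is a minimal JUnit sketch; the Measurement class and its narrowing conversion are invented for the example:

import org.junit.Test;
import static org.junit.Assert.*;

public class MeasurementTest {

    // Hypothetical class whose getter contains real logic (a narrowing
    // conversion that can fail), so testing it is worthwhile.
    static class Measurement {
        private final int rawValue;

        Measurement(int rawValue) {
            this.rawValue = rawValue;
        }

        // Getter with logic: refuses values that don't fit in a byte.
        byte getValueAsByte() {
            if (rawValue < Byte.MIN_VALUE || rawValue > Byte.MAX_VALUE) {
                throw new IllegalStateException("value does not fit in a byte: " + rawValue);
            }
            return (byte) rawValue;
        }
    }

    @Test
    public void getterConvertsValueThatFits() {
        assertEquals((byte) 100, new Measurement(100).getValueAsByte());
    }

    @Test(expected = IllegalStateException.class)
    public void getterRejectsValueThatDoesNotFit() {
        new Measurement(300).getValueAsByte();
    }
}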
Test things that have a reasonable chance of failure - you gain no extra confidence by testing properties with no logic beyond get/set.
Simple rule of thumb I use when doing TDD: always write tests that fail.
If a test fails at first it's a good test for TDD. It means something is not yet implemented, or not implemented as it should be. Then you can change code to make it pass. A test that succeeds at first is a bad test. You don't even know if it succeeded because you made some mistake writing the test, or because what you are testing is already working.
If you are able to produce a test failure using properties, then write tests for those properties. Typically you should begin by writing a test for a setter or getter before implementing it. A not-yet-implemented setter or getter can look trivial but makes the test fail. And why would you write any line of code, even a setter or getter, if it's not driven by a test failure?
Other kinds of tests, like tests as documentation showing how to use an API, are also very useful and good agile practice, but that is not TDD. TDD is about actively trying to break the code until you can't any more. Then you run functional tests, and if you pushed the unit tests hard enough, and there isn't some integration or system problem getting in the way, all should be fine.
I usually write junit tests for accessors. It doesn't add much when they're written, except to keep coverage statistics pretty. But if someone adds "sufficient logic" later to the production code, the tests will already be in place to catch any mistakes.
Also it is the work of moments to write a test to check the value returned by a getter.
Instead of thinking of them as tests, think of each test as an example of how someone could use your code.
Instead of just testing properties, think about the behaviour which changes when the properties have different values, and give an example of the behaviour of the class in each meaningful context.
If it's really just a data property you can test by inspection, or with automated acceptance tests, or manually, maybe with a tester's help. Otherwise, don't worry about testing each method, or each property - just show how you can use the code and how you expect it to behave.
Think of TDD and unit tests as a way to "see ahead": you write the test against the public signature you would like your classes to have.

Am I doing something fundamentally wrong in my unit tests?

After reading an interesting article about unit testing behavior instead of state, I came to realize that my unit tests often are tightly coupled to my code because I am using mocks.
I cannot imagine writing unit tests without mocks, but the fact is that these mocks couple my unit tests very tightly to my code because of the expect/andReturn calls.
For example when I create a test that uses a mock, I record all calls to the specific mock and assign return values.
Now when I change the implementation of the actual code for whatever reason, a lot of tests break because that call was not expected by the mock, forcing me to update the unit test also, and effectively forcing me to implement every change twice...
This happens a lot.
Is this issue intrinsic to using mocks, and should I learn to live with it, or am I doing something fundamentally wrong?
Please enlighten me :)
Clear examples coming with the explanation are most welcome of course.
"when I create a test that uses a mock, I record all calls to the specific mock and assign return values"
It sounds like you may be over-specifying expectations.
Try to build as little setup code as possible into your tests: stub (rather than expect) all behavior that doesn't pertain to the current test and only specify return values that are absolutely needed to make your test work.
This answer includes a concise example (as well as an alternative, more detailed explanation).
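For a rough feel of the difference, here is a sketch in JUnit/Mockito terms (not necessarily the mocking library you are using, and the OrderService, PriceCatalog, and AuditLog types are invented): stub only the return value the test actually needs, leave the other collaborator alone, and assert on the outcome instead of verifying every call.

import org.junit.Test;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

public class OrderServiceTest {

    // Hypothetical collaborators and service, just to show the shape of the test.
    interface PriceCatalog { double priceOf(String sku); }
    interface AuditLog { void record(String message); }

    static class OrderService {
        private final PriceCatalog catalog;
        private final AuditLog audit;
        OrderService(PriceCatalog catalog, AuditLog audit) {
            this.catalog = catalog;
            this.audit = audit;
        }
        double total(String sku, int quantity) {
            double total = catalog.priceOf(sku) * quantity;
            audit.record("totalled " + sku);
            return total;
        }
    }

    @Test
    public void totalMultipliesUnitPriceByQuantity() {
        PriceCatalog catalog = mock(PriceCatalog.class);
        AuditLog audit = mock(AuditLog.class);           // stubbed collaborator, never verified here
        when(catalog.priceOf("widget")).thenReturn(2.5); // only the value this test needs

        double total = new OrderService(catalog, audit).total("widget", 4);

        // Assert on the outcome; don't verify every call the implementation happens to make.
        assertEquals(10.0, total, 0.0001);
    }
}

Because nothing else is pinned down, a later change to how the service logs or fetches prices will not break this test unless the total itself changes.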
My experience is to use mocks only at the boundaries of (sub)systems. If I have two classes that are strongly related, I do not mock them apart but test them together. An example might be a composite and a visitor: if I test a concrete visitor, I do not use a mock for the composite but create real composites. One might argue that this is not a unit test (it depends on your definition of a unit), but that doesn't matter much. What I try to achieve is:
Write readable tests (tests without mocks are usually easier to read).
Test only a focused area of code (in the example, the concrete visitor and the relevant part of the composite).
Write fast tests (as long as I instantiate only a few classes, in the example the concrete composites, this is not a concern ... watch out for transitive creations).
Only when I encounter the boundary of a subsystem do I use mocks. Example: if I have a composite that can render itself to a renderer, I would mock out the renderer when testing the render logic of the composite (sketched below).
Testing behaviour instead of state looks promising at first, but in general I would test state, as the resulting tests are easier to maintain. Mocks are a sledgehammer; don't use one to crack a nut.
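A rough sketch of that idea, with invented Composite/Leaf/Renderer types and Mockito standing in for whatever mocking library you use: the composite and its children are real objects, and only the renderer at the subsystem boundary is mocked.

import org.junit.Test;
import org.mockito.InOrder;
import static org.mockito.Mockito.*;

public class CompositeRenderingTest {

    // Hypothetical types, kept tiny to illustrate the boundary idea.
    interface Renderer { void drawText(String text); }

    static class Leaf {
        private final String text;
        Leaf(String text) { this.text = text; }
        void render(Renderer renderer) { renderer.drawText(text); }
    }

    static class Composite {
        private final java.util.List<Leaf> children = new java.util.ArrayList<Leaf>();
        void add(Leaf leaf) { children.add(leaf); }
        void render(Renderer renderer) {
            for (Leaf child : children) {
                child.render(renderer);
            }
        }
    }

    @Test
    public void rendersAllChildrenInOrder() {
        // Real composite and leaves: cheap to create and easy to read ...
        Composite composite = new Composite();
        composite.add(new Leaf("header"));
        composite.add(new Leaf("body"));

        // ... and a mock only at the subsystem boundary.
        Renderer renderer = mock(Renderer.class);
        composite.render(renderer);

        InOrder inOrder = inOrder(renderer);
        inOrder.verify(renderer).drawText("header");
        inOrder.verify(renderer).drawText("body");
    }
}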
If you are fixing the tests because they break, you are not using them as intended.
If the behaviour of a method changes, in test driven development you would first change the test to expect the new behaviour, then implement the new behaviour.
Several good answers here already, but for me a good rule of thumb is to test the requirements of the method, not the implementation. Sometimes that may mean using a mock object because the interaction is the requirement, but you're usually better off testing the return value of the method or the change in state of the object.

TDD - top level function has too many mocks. Should I even bother testing it?

I have a .NET application with a web front-end and a WCF Windows service back-end. The application is fairly simple: it takes some user input and sends it to the service. The service takes the input (an Excel spreadsheet), extracts the data items, and checks the SQL database to make sure the items don't already exist; if they don't, we make a real-time request to a third-party data vendor, retrieve the results, and insert them into the database. It does some logging along the way.
I have a Job class with a single public ctor and public Run() method. The ctor takes all the params, and the Run() method does all of the above logic. Each logical piece of functionality is split into a separate class - IParser does file parsing, IConnection does the interaction with the data vendor, IDataAccess does the data access, etc. The Job class has private instances of these interfaces, and uses DI to construct the actual implementations by default, but allows the class user to inject any interface.
In the real code, I use the default ctor. In my unit tests for the Run() method, I use all mock objects, created via NMock 2.0. This Run() method is essentially the 'top-level' function of this application.
Now here's my issue / question: the unit tests for this Run() method are crazy. I have three mock objects I'm sending into the ctor, and each mock object sets expectations on themselves. At the end I verify. I have a few different flows that the Run method can take, each flow having its own test - it could find everything is already in the database and not make a request to vendor... or an exception could be thrown and the job status could be set to 'failed'... OR we can have the case where we didn't have the data and needed to make the vendor request (so all those function calls would need to be made).
Now, before you yell at me and say 'your Run() method is too complicated!': this Run() method is a mere 50 lines of code (it does make calls to some private functions, but the entire class is only 160 lines), since all the 'real' logic is done in the interfaces declared on this class. However, the biggest unit test on this function is 80 lines of code, with 13 calls to Expect.BLAH().
This makes re-factoring a huge pain. If I want to change this Run() method around, I have to go edit my three unit tests and add/remove/update Expect() calls. When I need to refactor, I end up spending more time creating my mock calls than I did actually writing the new code. And doing real TDD on this function makes it even more difficult if not impossible. It's making me think that it's not even worth unit testing this top level function at all, since really this class isn't doing much logic, it's just passing around data to its composite objects (which are all fully unit tested and don't require mocking).
So - should I even bother testing this high level function? And what am I gaining by doing this? Or am I completely misusing mock/stub objects here? Perhaps I should scrap the unit tests on this class, and instead just make an automated integration test, which uses the real implementations of the objects and Asserts() against SQL Queries to make sure the right end-state data exists? What am I missing here?
EDIT: Here's the code - the first function is the actual Run() method - then my five tests which test all five possible code paths. I changed it some for NDA reasons but the general concept is still there. Anything you see wrong with how I'm testing this function, any suggestions on what to change to make it better? Thanks.
I guess my advice echoes most of what is posted here.
It sounds as if your Run method needs to be broken down more. If its design is forcing you into tests that are more complicated than it is, something is wrong. Remember this is TDD we're talking about, so your tests should dictate the design of your routine. If that means testing private functions, so be it. No technological philosophy or methodology should be so rigid that you can't do what feels right.
Additionally, I agree with some of the other posters that your tests should be broken down into smaller segments. Ask yourself this: if you were writing this app for the first time and your Run function didn't yet exist, what would your tests look like? That response is probably not what you have currently (otherwise you wouldn't be asking the question). :)
The one benefit you do have is that there isn't a lot of code in the class, so refactoring it shouldn't be very painful.
EDIT
Just saw you posted the code and had some thoughts (no particular order).
Way too much code (IMO) inside your SyncLock block. The general rule is to keep the code inside a SyncLock to a minimum. Does it ALL have to be locked?
Start breaking code out into functions that can be tested independently. Example: the For loop that removes IDs from the List(String) if they exist in the DB. Some might argue that the m_dao.BeginJob call should be in some sort of GetID function that can be tested.
Can any of the m_dao procedures be turned into functions that can be tested on their own? I would assume that the m_dao class has its own tests somewhere, but looking at the code it appears that might not be the case. They should, along with the functionality in the m_Parser class. That will relieve some of the burden of the Run tests.
If this were my code, my goal would be to get it to a place where all the individual procedure calls inside Run are tested on their own and the Run tests just test the final outcome. Given input A, B, C: expect outcome X. Given input E, F, G: expect Y. The detail of how Run gets to X or Y is already covered by the other procedures' tests.
These were just my initial thoughts. I'm sure there are a bunch of different approaches one could take.
Two thoughts: first you should have an integration test anyway to make sure everything hangs together. Second, it sounds to me like you're missing intermediate objects. In my world, 50 lines is a long method. It's hard to say anything more precise without seeing the code.
The first thing I would try would be refactoring your unit tests to share the setup code between tests, by extracting a method that sets up the mocks and expectations. Parameterize the method so your expectations are configurable. You may need one or more of these setup methods, depending on how similar the setup is from test to test.
"So - should I even bother testing this high level function?"
Yes. If there are different code paths, you should.
"And what am I gaining by doing this? Or am I completely misusing mock/stub objects here?"
As J.B. pointed out (nice seeing you at AgileIndia2010!), Fowler's article is a recommended read. As a gross simplification: use stubs when you don't care about the values returned by the collaborators. If the return value from collaborator.call_method() changes the behaviour (or you need non-trivial checks on the arguments, or computation to produce the return values), you need mocks.
Suggested refactorings:
Try moving the creation and injection of mocks into a common Setup method. Most unit testing frameworks support this; it will be called before each test (there is a sketch of both suggestions after the example below).
Your LogMessage calls are beacons - calling out once again for intention revealing methods. e.g. SubmitBARRequest(). This will shorten your production code.
Try to move each Expect.Blah1(..) call into an intention-revealing method.
This will shorten your test code and make it much more readable and easier to modify. e.g.
Replace all instances of
Expect.Once.On(mockDao) _
.Method("BeginJob") _
.With(New Object() {submittedBy, clientID, runDate, "Sent For Baring"}) _
.Will([Return].Value(0))
with
ExpectBeginJobOnDAO_AndReturnZero(); // you can name it better
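To show both suggestions together (the common setup method and the intention-revealing helper), here is a hedged sketch transposed into JUnit/Mockito terms rather than NMock 2.0; the JobDao and Job types are invented stand-ins for the real interfaces:

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

public class JobRunTest {

    // Hypothetical collaborator and Job class, standing in for the real ones.
    interface JobDao { int beginJob(String submittedBy, int clientId, String status); }

    static class Job {
        private final JobDao dao;
        Job(JobDao dao) { this.dao = dao; }
        int run(String submittedBy, int clientId) {
            return dao.beginJob(submittedBy, clientId, "Sent For Processing");
        }
    }

    private JobDao mockDao;

    @Before
    public void setUp() {
        // Mock creation lives in one place instead of being repeated in every test.
        mockDao = mock(JobDao.class);
    }

    // Intention-revealing helper: the test body states *what* is expected,
    // the helper hides *how* the stub is wired up.
    private void expectBeginJobOnDaoAndReturnZero() {
        when(mockDao.beginJob(anyString(), anyInt(), anyString())).thenReturn(0);
    }

    @Test
    public void runBeginsAJobAndReturnsItsId() {
        expectBeginJobOnDaoAndReturnZero();

        int jobId = new Job(mockDao).run("user", 42);

        assertEquals(0, jobId);
    }
}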
On whether to test such a function: you said in a comment,
"the tests read just like the actual function, and since I'm using mocks, it's only asserting that the functions are called and sent params (I can check this by eyeballing the 50-line function)"
IMHO eyeballing the function isn't enough - haven't you ever heard "I can't believe I missed that!"? You have a fair number of scenarios that could go wrong in that Run method; covering that logic is a good idea.
On tests being brittle: try having some shared methods in the test class that you can use for the common scenarios. If you are concerned about a later change breaking all the tests, put the pieces that concern you in specific methods that can be changed if needed.
On tests being too long / hard to know what's in them: don't test single scenarios with every single assertion that's related to them. Break it up: test things like "it should log x messages when y happens" (one test), "it should save to the db when y happens" (another, separate test), "it should send a request to a third party when z happens" (yet another test), and so on.
On doing integration/system tests instead of these unit tests: you can see from your current situation that there are plenty of scenarios and little variations involved in that part of your system, and that's with the shield of replacing yet more logic with those mocks and the ease of simulating different conditions. Doing the same with the whole thing will add a whole new level of complexity to your scenario, which is surely unmanageable if you want to cover a wide set of scenarios.
IMHO you should minimize the combinations you leave for your system tests; exercising a few main scenarios should already tell you that a lot of the system is working correctly - at that level it's mostly about everything being hooked up correctly.
That said, I do recommend adding focused integration tests for all the integration code you have that might not currently be covered by your tests, since by definition unit tests don't get there. These exercise specifically the integration code with all the variations you expect from it; the corresponding tests are much simpler than trying to reach those behaviours in the context of the whole system, and they tell you very quickly whether any assumptions in those pieces are causing trouble.
If you think unit tests are too hard, do this instead: add post-conditions to the Run method. Post-conditions are assertions you make about the state of the code at a particular point. For example, at the end of that method, you may want some variable to hold a particular value, or one value out of a set of possible choices.
Afterwards, you can derive your pre-conditions for the method. These are basically the data type of each parameter and the limits and constraints on each of those parameters (and on any other variable initialized at the beginning of the method).
In this way, you can be sure both the input and output are what is desired.
That probably still won't be enough so you will have to look at the code of the method line by line and look for large sections that you want to make assertions about. If you have an If statement, you should check for some conditions before and after it.
You won't need any mock objects if you know how to check if the arguments to the object are valid and you know what range of outputs are desired.
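As a loose illustration of the idea (the method and its rules are invented, loosely inspired by the "remove IDs that already exist" step described above), pre- and post-conditions expressed with plain Java assert statements might look like this; remember they only run with the -ea JVM flag:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class JobRunner {

    // Hypothetical method in the spirit of Run(): keep only the ids that are not already known.
    public List<String> filterNewIds(List<String> candidateIds, List<String> existingIds) {
        // Pre-conditions: constraints on the parameters before any work is done.
        assert candidateIds != null && !candidateIds.isEmpty() : "candidateIds must be provided";
        assert existingIds != null : "existingIds must not be null";

        List<String> newIds = new ArrayList<String>(candidateIds);
        newIds.removeAll(existingIds);

        // Post-conditions: what must be true of the result when the method ends.
        assert newIds.size() <= candidateIds.size() : "cannot produce more ids than were submitted";
        assert Collections.disjoint(newIds, existingIds) : "no existing id may remain in the result";

        return newIds;
    }
}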
Your tests are too complicated.
You should test aspects of your class rather than writing a unit test for each member of your class. A unit test should not have to cover the entire functionality of a member.
I'm going to guess that each test for Run() set expectations on every method they call on the mocks, even if that test doesn't focus on checking every such method invocation. I strongly recommend you Google "mocks aren't stubs" and read Fowler's article.
Also, 50 lines of code is pretty complex. How many codepaths through the method? 20+? You might benefit from a higher level of abstraction. I'd need to see code to judge more certainly.

Unit testing: Is it a good practice to have assertions in setup methods?

In unit testing, the setup method is used to create the objects needed for testing.
In those setup methods, I like using assertions: I know what values I want to see in those objects, and I like to document that knowledge via an assertion.
In a recent post on unit tests calling other unit tests here on stackoverflow, the general feeling seems to be that unit tests should not call other tests:
The answer to that question seems to be that you should refactor your setup, so that test cases do not depend on each other.
But there isn't much difference between a "setup-with-asserts" and a unit test calling other unit tests.
Hence my question: Is it good practice to have assertions in setup methods?
EDIT:
The answer turns out to be: this is not a good practice in general. If the setup results need to be tested, it is recommended to add a separate test method with the assertions (the answer I ticked); for documenting intent, consider using Java asserts.
Instead of assertions in the setup to check the result, I used a simple test (a test method alongside the others, but positioned as the first test method).
I have seen several advantages:
The setup stays short and focused, which helps readability.
The assertions are run only once, which is more efficient.
Usage and discussion :
For example, I name the method testSetup().
When I have test failures in that class, I know that if testSetup() has an error I don't need to bother with the other failures yet; I need to fix this one first.
If someone is bothered by this, and wants to make this dependency explicit, the testSetup() could be called in the setup() method. But I don't think it matters. My point is that, in JUnit, you can already have something similar in the rest of your tests:
some tests that test local code,
and some tests that call more global code, which indirectly exercises the same code as the previous tests.
When you read the test results where both fail, you already have to deal with this dependency, which is not in the tests but in the code being called. You have to fix the simple test first, and then rerun the global test to see if it still fails.
This is the reason why I'm not bothered by the implicit dependency I explained before.
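A minimal JUnit sketch of the testSetup() idea, with an invented fixture (note that JUnit does not guarantee test-method order, so "first" is by convention only):

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;

public class AccountServiceTest {

    // Hypothetical fixture object.
    private java.util.Map<String, Integer> balances;

    @Before
    public void setUp() {
        // Keep the setup short and assertion-free.
        balances = new java.util.HashMap<String, Integer>();
        balances.put("alice", 100);
        balances.put("bob", 50);
    }

    @Test
    public void testSetup() {
        // Checks the fixture itself; if this fails, fix it before looking at other failures.
        assertEquals(2, balances.size());
        assertEquals(Integer.valueOf(100), balances.get("alice"));
    }

    @Test
    public void transferMovesMoneyBetweenAccounts() {
        // A normal test that relies on the same fixture.
        balances.put("alice", balances.get("alice") - 30);
        balances.put("bob", balances.get("bob") + 30);
        assertEquals(Integer.valueOf(70), balances.get("alice"));
        assertEquals(Integer.valueOf(80), balances.get("bob"));
    }
}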
Having assertions in the Setup/TearDown methods is not advisable. It makes the test less readable if the user needs to "understand" that some of the test logic is not in the test method.
There are times when you have no choice but to use the setup/teardown methods for something other than what they were intended for.
There is a bigger issue in this question: a test that calls another test is a smell for some problem in your tests.
Each test should test a specific aspect of your code and should only have one or two assertions in it, so if your test calls another test you might be testing too many things in that test.
For more information read: Unit Testing: One Test, One Assertion - Why It Works
They're different scenarios; I don't see the similarity.
Setup methods should contain code that is common to (ideally) all tests in a fixture. As such, there's nothing inherently wrong with putting asserts in a test setup method if certain things must be true before the rest of the test code executes. The setup is an extension of the test; it is part of the test as a whole. If the assert trips, people will discover which pre-requisite failed.
On the other hand, if the setup is complicated enough that you feel the need to assert it is correct, it may be a warning sign. Furthermore, if all tests do not require the setup's full output, then it is a sign that the fixture has poor cohesion and should be split up based on scenarios and/or refactored.
It's partly because of this that I tend to stay away from using Setup methods. Where possible, I use private factory methods or similar to set things up. It makes the test more readable and avoids confusion. Sometimes this is not practical (e.g. working with tightly coupled classes and/or when writing integration tests), but for the majority of my tests it does the job.
Follow your heart / Blink decisions. Asserts within a Setup method can document intent and improve readability. So personally I'd back you up on this.
It is different from a test calling other tests - which is bad. No test isolation. A test should not influence the outcome of another test.
Although it is not a frequent use case, I sometimes use asserts inside a Setup method so that I can know if the test setup has not taken place as I intended, usually when I'm dealing with components that I didn't write myself. An assertion failure that reads 'Setup failed!' in the errors tab quickly helps me zero in on the setup code instead of having to look at a bunch of failed tests.
A Setup failure usually should cause all tests in that fixture to fail, which is a smell your nose should soon pick up: 'all tests failed' usually implies 'Setup broke'. So assertions are not always needed. That said, be pragmatic, look at your specific context, and 'add to taste'.
I use Java asserts, rather than JUnit ones, in the cases where something like this is necessary, e.g. when you use some other utility class to set up test data:
byte[] pkt = pktFactory.makePacket(TIME, 12, "23, F2");
assert pkt.length == 15;
Failing has the implication 'system is not in a state to even try to run this test'.

Is unit-testing of accessors a must?

For classes that have several setters and getters besides other methods, is it reasonable to save time on writing unit tests for the accessors, taking into account that they will be called while testing the rest of the interface anyway?
I would only unit test them if they do more than set or return a variable. At some point, you need to trust that the compiler is going to generate the right program for you.
Absolutely. The idea of unit tests is to ensure that changes do not affect behavior in unknown ways. You might save some time by not writing a test for getFoo(). If you change the type of Foo to be something a little more complex then you could easily forget to test the accessor. If you are questioning whether you should write a test or not, you are better off writing it.
IMHO, if you are thinking about skipping tests for a method, you might want to ask yourself whether the method is necessary at all. In the interest of full disclosure, I am one of those people who only adds a setter or getter when it is proven necessary. You would be surprised how often you really don't need access to a specific member after construction, or only want to give access to the result of some calculation that happens to depend on the member. But I digress.
A good mantra is to always add tests. If you don't think that you need one because the method is trivial, consider removing the method instead. I realize that the common advice is that it is okay to skip tests for "trivial" methods but you have to ask yourself if the method is even necessary. If you skip the test, you are saying that the method will always be trivial. Remember that unit tests also function as documentation of the intent and the contract offered. Hence tests of a trivial method state that the method is indeed meant to be trivial.
My criteria for testing is that every piece of code containing conditional logic (while, if, for, etc) be tested. If the accessors are simple getters/setters, I'd say testing them is wasting your time.
You don't have to write tests for properties that contain no logic.
The only reason to test simple properties is to boost test coverage - and that's just silly.
I think it's reasonable to save time and not write unit tests that you don't think will be particularly helpful.
While 100% test coverage is an admirable ideal, at some point you run into diminishing returns where the time you spent writing the test isn't worth the benefit you get out of having it.
You can always go back and add more unit tests later if you find situations where you decide they would be useful.
Our company has both kinds of people and opinions. I tend not to test them specifically, as they are usually:
automatically generated
tested in the context of another test (e.g. there's some other code making use of these accessors)
not containing any code that might break
There are exceptions though:
When they are not simply generated 'getters' and 'setters'
When they are part of an important API that's just provided for other users and not really tested in the context you're currently in
Both these cases might cause me to test them. The first one more than the second.
No friggin' way!
Waste of time!
Even Bob Martin, the grandfather of Agile, says no (see SO podcast 41).
If your IDE generates and manages modifications for member accessors (you won't be doing anything special), then testing them really isn't important; types will match up, naming will follow a template, etc.
I think most people will say testing them is a waste of your time. In the 99% case that is true. If there's a bug in an accessor and the rest of your unit tests don't catch it indirectly then I'd start questioning why that property is there at all.
On the other hand, testing an accessor takes less typing than asking this question :)
Personally I test them. But this is a gray area for me and I don't press other people in my group to test them as long as they have sufficient coverage around the functionality of the class.
Usually when I consider writing unit tests I ask myself the following:
Is the getter/setter accessing anything on the DAL (Data Access Layer)?
If so, then I would include a unit test, just in case: if at some point in the future you decide to implement lazy loading, or something more advanced than a simple get/set, you'll need to make sure it is working properly.
Is it foreseeable that the getter/setter will throw an exception?
The best practice for getters is not to allow them to throw exceptions at all. Setters are another matter. However, either way, if you decide that a property might possibly throw an exception, write a unit test for that property, both for a successful access and for purposefully generating the exception.
Other than that I wouldn't bother, as Dan pointed out, "At some point, you need to trust that the compiler is going to generate the right program for you."
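To make the DAL point concrete, here is a hedged JUnit/Mockito sketch (the Customer class and CustomerDal interface are invented) of testing a getter that lazily loads its value through the data access layer:

import org.junit.Test;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

public class CustomerTest {

    // Hypothetical DAL interface and a lazily loaded property backed by it.
    interface CustomerDal { String loadName(int customerId); }

    static class Customer {
        private final int id;
        private final CustomerDal dal;
        private String name; // loaded on first access

        Customer(int id, CustomerDal dal) {
            this.id = id;
            this.dal = dal;
        }

        String getName() {
            if (name == null) {
                name = dal.loadName(id); // getter touches the DAL, so it deserves a test
            }
            return name;
        }
    }

    @Test
    public void getterLoadsNameFromDalOnlyOnce() {
        CustomerDal dal = mock(CustomerDal.class);
        when(dal.loadName(7)).thenReturn("Ada");

        Customer customer = new Customer(7, dal);
        assertEquals("Ada", customer.getName());
        assertEquals("Ada", customer.getName());

        verify(dal, times(1)).loadName(7); // second access hits the cached value
    }
}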
I like to have unit tests for them. If an accessor does any kind of work besides simply return a field then that code will be tested appropriately.
Even if a given accessor doesn't do anything other than return a field, it might be modified later to do something extra.
Also, it's an easy way to up the number of tests being run, which many managers like.