I just discovered that I was getting twice the number of tests run that I should've been getting. Discovered it when a test broke and I got two identical test failures. Same test, same everything. Got me quite confused, but managed to narrow it down to a certain test class that was a partial class.
The reason it was a partial class was that I had split a test class in two, just to make it a bit cleaner. The class under test had a certain method that required a long range of tests, and I thought it would be cleaner to have those in a separate file. But since there were one or two helper methods used by both, I figured I could just make the class partial so both files still had access to those methods.
The test framework is NUnit and the tests were run using TestDriven.Net. I ran the tests from inside a single test method (reported two tests passed instead of one), on the class (got twice the number of tests passed), and on the whole test project.
Managed to fix the issue by making the classes not partial and just duplicating those tiny helper methods (might move those to a separate helper class or something later).
Now... why on earth is this happening? I thought partial classes were compiled into a single class? Is this an issue with partial classes in general, NUnit, Test-Driven.net or something completely different?
You probably put the [TestFixture] attribute in both files of the partial class. That causes the attribute to be emitted twice on the compiled class, and NUnit then runs the same test code twice. Add [TestFixture] in only one of the files of your partial class.
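For example, a minimal sketch of an arrangement that avoids the duplicate run (type and member names here are invented):

using NUnit.Framework;

// File 1 (e.g. WidgetTests.cs): the only part that carries [TestFixture].
[TestFixture]
public partial class WidgetTests
{
    [Test]
    public void Input_IsNotEmpty() => Assert.That(BuildInput(), Is.Not.Empty);
}

// File 2 (e.g. WidgetTests.More.cs): more tests plus the shared helper, but no [TestFixture].
public partial class WidgetTests
{
    [Test]
    public void Input_HasExpectedLength() => Assert.That(BuildInput().Length, Is.EqualTo(12));

    // Shared helper, visible to both files because the class is partial.
    private static string BuildInput() => "sample input";
}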
Just having a conversation with someone in the office about using a business logic class to build up some data in order to test another class.
Basically, he has class A which takes a complex type as a parameter and then generates a collection of a different complex type as a result. He's written tests around this class already. Now he's moved on to testing another class (class B) which takes the result of class A then performs some logic on it.
He's asked the question, "should I use class A to build up a scenario to test class B with".
At first I said yes as class A has tests around it. But then I figured well what if there's some bugs in class A that we haven't found yet... so I guess there must be a better way to address this scenario.
Does anyone have any thoughts on this? Is it OK to use existing logic to save time writing other tests?
Regards,
James
Stances on that might differ. Generally, if code is tested and you assume it works, you're free to use it. This is especially true when using already tested methods to help with testing others (within a single class/unit). However, since that happens within a single unit, it's a bit different from your case.
Now, when dealing with two separate classes I'd say you should avoid such an approach, purely because those two classes might not be related in an obvious way, or their context/scope of usage might differ vastly. As in, somebody might change class A without even knowing class B exists; class B's tests suddenly break even though no changes were made to B's code. This brings unnecessary confusion and is a situation you usually don't want to find yourself in.
Instead, I suggest creating a helper method within class B's test file. With stubs/fakes and tools like AutoFixture, you should be able to easily reproduce the generation logic used by class A and have your own "copy" contained in class B's tests.
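A minimal sketch of that idea (all type and member names below are invented; the same data could just as easily come from AutoFixture):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical stand-ins for the reader's own types.
public record Person(string Name, int Age);

public class ClassB
{
    public int CountAdults(IEnumerable<Person> people) => people.Count(p => p.Age >= 18);
}

[TestFixture]
public class ClassBTests
{
    // Helper kept inside class B's test file: it reproduces the shape of the data
    // class A would generate, without ever calling class A.
    private static List<Person> CreateInputLikeClassAWouldReturn() =>
        new() { new Person("Alice", 30), new Person("Bob", 12) };

    [Test]
    public void CountAdults_CountsOnlyAdults()
    {
        var result = new ClassB().CountAdults(CreateInputLikeClassAWouldReturn());
        Assert.That(result, Is.EqualTo(1));
    }
}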
In order to test class B, the result returned by class A should be replicated somewhere in your test. If class A returns a list of persons, you will have a helper function in your test returning a fake List<Person> to use.
Because you are only testing class B, class A should not be used in your test.
NUnit provides built-in functionality for supplying data to your tests; have a look at:
http://www.nunit.org/index.php?p=testCaseSource&r=2.5
Or you can simply create a DataFactory class with methods that return the data (simple objects, collections, etc.) you will consume in your tests.
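For reference, a minimal sketch of the TestCaseSource approach (names are invented and the data is deliberately trivial):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class DataDrivenTests
{
    // DataFactory-style source: each yielded array becomes one test case
    // (the input collection plus the expected count).
    private static IEnumerable<object[]> NameListCases()
    {
        yield return new object[] { new List<string>(), 0 };
        yield return new object[] { new List<string> { "Alice", "Bob" }, 2 };
    }

    [TestCaseSource(nameof(NameListCases))]
    public void Count_MatchesExpected(List<string> names, int expectedCount)
    {
        Assert.That(names.Count, Is.EqualTo(expectedCount));
    }
}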
I have a PersonDao that I'm writing unit tests against.
There are about 18-20 methods in PersonDao of the form -
getAllPersons()
getAllPersonsByCategory()
getAllPersonsUnder21() etc
My approach to testing this was to create a PersonDaoTest with about 18 test methods, one testing each of the methods in PersonDao.
Then I created a PersonDaoPaginationTest that tested these 18 methods by applying pagination parameters.
Is this in any way against TDD best practices? I was told that this creates confusion and is against best practice since it is non-standard. What was suggested was merging the two classes into PersonDaoTest instead.
As I understand it, the more your code is broken down into smaller classes, the better. Please comment.
The fact that you have a set of 18 tests that you are going to have to duplicate to test a new feature is a smell that suggests that your PersonDao class is taking on multiple responsibilities. Specifically, it appears to be responsible both for querying/filtering and for pagination. You may want to take a look at whether you can do a bit of design work to extract the pagination functionality into a separate class which could then be tested independently.
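A rough sketch of what that extraction might look like (invented names, shown in C# although the original example reads like Java; the shape is the same):

using System.Collections.Generic;
using System.Linq;

// Hypothetical extraction: pagination as its own small, independently testable unit.
public record PageRequest(int PageNumber, int PageSize);

public static class Paginator
{
    public static IEnumerable<T> Apply<T>(IEnumerable<T> source, PageRequest page) =>
        source.Skip((page.PageNumber - 1) * page.PageSize)
              .Take(page.PageSize);
}

The existing DAO tests then keep covering the queries, while a handful of Paginator tests cover the page arithmetic once, instead of every query test being duplicated with pagination parameters.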
But in answer to your question, if you find that you have a class that you want to remain complex, then it's perfectly fine to use multiple test classes as a way of organizing a large number of tests. @Gishu's answer of grouping tests by their setup is a good approach. @Ryan's answer of grouping by "facets" or features is another good approach.
Can't give you a sweeping answer without looking at the code... except use whatever seems coherent to you and your team.
I've found that grouping tests based on their setup works out nicely in most cases, i.e. if 5 tests require the same setup, they usually fit nicely into a test fixture. If the 6th test requires a different setup (more or less), break it out into a separate test fixture.
This also leads to test fixtures that are feature-cohesive (i.e. tests grouped by feature); give it a try. I'm not aware of any best practice that says you need to have one test class per production class... in practice I find I have n test classes per production class; the best practice would be to use good names and keep related tests close (in a named folder).
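A small sketch of that setup-based grouping, shown here NUnit-style with an invented ShoppingCart class:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical class under test.
public class ShoppingCart
{
    private readonly List<string> _items = new();
    public void Add(string item) => _items.Add(item);
    public int Count => _items.Count;
    public bool IsEmpty => _items.Count == 0;
}

// Fixture 1: every test here shares the "empty cart" setup.
[TestFixture]
public class ShoppingCartTests_EmptyCart
{
    private ShoppingCart _cart;

    [SetUp]
    public void SetUp() => _cart = new ShoppingCart();

    [Test]
    public void IsEmpty_IsTrue() => Assert.That(_cart.IsEmpty, Is.True);
}

// Fixture 2: a different setup (pre-filled cart), so it gets its own fixture.
[TestFixture]
public class ShoppingCartTests_WithOneItem
{
    private ShoppingCart _cart;

    [SetUp]
    public void SetUp()
    {
        _cart = new ShoppingCart();
        _cart.Add("book");
    }

    [Test]
    public void Count_IsOne() => Assert.That(_cart.Count, Is.EqualTo(1));
}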
My 2 cents: when you have a large class like that that has different "facets" to it, like pagination, I find it can often make for more understandable tests to not pack them all into one class. I can't claim to be a TDD guru, but I practice test-first development religiously, so to speak. I don't do it often, but it's not exactly rare, either, that I'll write more than a single test class for a particular class. Many people seem to forget good coding practices like separation of concerns when writing tests, though. I'm not sure why.
I think one test class per class is fine - if your implementation has many methods, then your test class will have many methods - big deal.
You may consider a couple of things however:
Your methods seem a bit "overly specific" and could use some abstraction or generalisation, for example instead of getAllPersonsUnder21() consider getAllPersonsUnder(int age)
If there are some more general aspects of your class, consider testing them using some common test code with callbacks. For a trivial example, to illustrate testing that getAllPersons() returns multiple hits, do this:
@Test
public void testGetAllPersons() throws Exception {
    assertMultipleHits(new Callable<List<?>>() {
        public List<?> call() throws Exception {
            return myClass.getAllPersons(); // Your callback is here
        }
    });
}

public static void assertMultipleHits(Callable<List<?>> methodWrapper) throws Exception {
    // "multiple" means more than one result
    assertTrue("failure to get multiple items", methodWrapper.call().size() > 1);
}
This static method can be used by any class to test if "some method" returns multiple hits. You could extend this to do lots of tests over the same callback, for example running it with and without a DB connection up, testing that it behaves correctly in each case.
I'm working on test automation of a web app using Selenium. It is not unit testing, but you might find that some principles apply. The tests are very complex, and we figured out that the only way to implement them in a way that meets all our requirements was having one test per class. So we consider each class to be an individual test, and then we were able to use methods as the different steps of the test. For example:
public class SignUpTest
{
    public SignUpTest(Map<String, Object> data) {}

    public void step_openSignUpPage() {}
    public void step_fillForm() {}
    public void step_submitForm() {}
    public void step_verifySignUpWasSuccessful() {}
}
All the steps are dependent; they follow the order specified, and if one fails the others will not be executed.
Of course, each step is a test by itself, but all together they form the sign-up test.
The requirements were something like:
Tests must be data driven, that is, execute the same test in parallel with different inputs.
Tests must run in different browsers in parallel as well. So each test will run "input_size x browsers_count" times in parallel.
Tests will focus on a web workflow, for example, "sign up with valid data", and they will be split into smaller test units for each step of the workflow. That makes things easier to maintain and debug (when you have a failure, it will say SignUpTest.step_fillForm() and you'll know immediately what's wrong).
Test steps share the same test input and state (for example, the id of the user created). Imagine if you put steps of different tests in the same class, for example:
public class SignUpTest
{
    public void signUpTest_step_openSignUpPage() {}
    public void signUpTest_step_fillForm() {}
    public void signUpTest_step_submitForm() {}
    public void signUpTest_step_verifySignUpWasSuccessful() {}

    public void signUpNegativeTest_step_openSignUpPage() {}
    public void signUpNegativeTest_step_fillFormWithInvalidData() {}
    public void signUpNegativeTest_step_submitForm() {}
    public void signUpNegativeTest_step_verifySignUpWasNotSuccessful() {}
}
Then, having state belonging to the two tests in the same class would be a mess.
I hope I was clear and that you find this useful. In the end, choosing what will represent your test, a class or a method, is just a decision that I think will depend on: what the target of a test is (in my case, a workflow around a feature), what's easier to implement and maintain, how you make a failure more precise and easier to debug when a test fails, what will lead you to more readable code, etc.
This is in part a followup to this question.
I'm not sure the best way to ask this, so I'll try a short story to set the scene:
Once upon a time, there was a class ‘A’, which had a unit test class ‘ATests’ responsible for testing its behaviour through the public interface. They lived happily together for a while and then a change happened, and class ‘B’ came along, which as it turned out had a lot in common with class ‘A’, so a base class was introduced.
The public behaviour of the base class is already covered by the tests for class A. So, the question is what happens next?
• Does class B need to have tests for the common (base class) behaviour? It seems like the behaviour is a part of B, so it should be tested, but should these tests be shared with those for class A? Or for the base class? If so, what’s the best way to share?
• Does the new base class need unit tests, or is it ok for base classes to be tested through the tests of their children? Does it matter if the base class is abstract?
• Is it enough to ensure that classes A & B derive from the base class and ‘trust’ the unit tests for the base class to test the common behaviour (so the tests don’t need to be replicated in the child classes)? The tests for A & B then only need to test their new/changed behaviour?
• Am I following totally the wrong approach having approximately one unit test class per real class?
I’ve taken different views at different times, and the different approaches can have quite an impact on the ability to refactor the code, the time taken to write tests, etc. What approaches have people found work best?
Personally, given time, I tend to test all three (base and two derived). It shows that you're not inadvertently overriding the base methods and changing their behavior, and your inherited class still provides the expected semantics. If the behavior really doesn't change, then it could be as simple as a copy-paste job, but it provides more complete coverage.
Note the "given time" part, though. If time is an issue (and it always is), testing the base class or the inherited functionality would probably be lower priority. But testing is great inoculation against yourself, and makes you much more confident when refactoring later, so you're only shortchanging yourself, your customers, and/or your maintainers by not doing as complete coverage as you have time for.
However, pawning repetitive things like this off on dedicated testers or a QA team, if you have one, is perfectly acceptable. But buy them a beer sometimes :-) They make you look better!
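If you do want to share those base-class tests rather than copy-paste them, one common pattern is an abstract test fixture: the shared tests live in an abstract base test class, and each concrete test class supplies its own instance via an overridden factory method. A hedged NUnit-flavoured sketch with invented types:

using NUnit.Framework;

// Hypothetical production types.
public abstract class Shape
{
    public string Name { get; set; } = "unnamed";
    public abstract double Area();
}

public class Square : Shape
{
    public double Side { get; set; } = 2;
    public override double Area() => Side * Side;
}

public class Circle : Shape
{
    public double Radius { get; set; } = 1;
    public override double Area() => 3.14159 * Radius * Radius;
}

// Abstract fixture: the base-class contract is tested once, here.
public abstract class ShapeContractTests
{
    protected abstract Shape CreateShape();   // each derived fixture supplies its instance

    [Test]
    public void Name_CanBeAssigned()
    {
        var shape = CreateShape();
        shape.Name = "renamed";
        Assert.That(shape.Name, Is.EqualTo("renamed"));
    }

    [Test]
    public void Area_IsPositive() => Assert.That(CreateShape().Area(), Is.GreaterThan(0));
}

// Concrete fixtures: they inherit the shared tests and add type-specific ones.
[TestFixture]
public class SquareTests : ShapeContractTests
{
    protected override Shape CreateShape() => new Square();

    [Test]
    public void Area_IsSideSquared() => Assert.That(new Square { Side = 3 }.Area(), Is.EqualTo(9));
}

[TestFixture]
public class CircleTests : ShapeContractTests
{
    protected override Shape CreateShape() => new Circle();
}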
You might look at code coverage tools; they can show you whether you're actually testing all of the code. Personally, if I have a test covering the base class behavior and I'm not overriding that behavior, I won't test it again. The goal is to have a code change (potentially) break only one test.
Don't feel the need to stick to one unit test class per real class. One unit test class per unique setup fixture is one way, and then there's the whole BDD school...
Refactoring test code is just as important as refactoring production code; both should be treated as first-class citizens. So if you are extracting public methods to the base class, then that base class should have its own set of tests. If your test cases are designed properly, with each test testing one thing, then the test refactoring should be easy.
If you are extracting protected functionality, then it's probably a slightly grey area. If the methods are new to the class, then I would expect them to be tested in the derived class, simply because they are probably there for the derived class to function properly. Ideally this should be kept to a minimum, as the base class functionality becomes less obvious otherwise.
The derived classes then will have the remaining public methods tested in their own set of tests.
Again, if you change the production code and not the tests, then you should use a coverage tool so you can feel confident that your code is still sufficiently covered.
Let's say I've got two classes, each fully tested. There's duplication though, so I refactor the common code into a new class. Should I unit test that new class? If so, how?
Here are the options I can see:
1. don't unit test the new class (it's already fully tested by the original tests)
2. copy+paste the original tests to the new test class
3. move the original tests to the new test class and replace the original tests with mocks
4. leave the original tests alone, but write more fine-grained tests in the new test class
There's duplication though, so I refactor the common code into a new class.
I'm taking that to mean that the two old classes now inherit the common behavior from the new class. If that's the case, then the old test cases should already be testing the common behavior and there's probably no need to write separate tests.
If that's not the case (like if you're creating a utility class whose methods are called by the two original classes), then I'd probably move the tests to their own unit test class so they only need to be in one place.
If your refactored class is covered by existing tests, you are probably ok.
Looking at your options, I would also do number 4. If you did some refactoring, you probably made something more generic than it was before. In that case, you could test the generic functionality in a generic manner. So I would do 4 if your refactored solution is more generic. If it is just moving code around to be DRY, I would probably do 1.
If it were me, I'd give the new class its own set of unit tests. Those tests would be a copy + paste of the previous tests which ran against the same code.
Although you're duplicating work, in the long term you need to think about how this new class might change and having those tests in a new unit test class/fixture will be cleaner for you.
Suppose I have several unit tests in a test class ([TestClass] in VSUnit in my case). I'm trying to test just one thing in each test (which doesn't mean just one Assert, though). Imagine there's one test (e.g. Test_MethodA()) that tests a method used in other tests as well. I do not want to put an assert on this method in the other tests that use it, to avoid duplication/maintainability issues, so I have the assert in only this one test. Now when this test fails, all tests that depend on correct execution of that tested method fail as well. I want to be able to locate the problem faster, so I want to be somehow pointed to Test_MethodA. It would help, for example, if I could make some of the tests in the test class execute in a particular order; when they fail, I'd start looking for the cause of the failure in the first failing test. Do you have any idea how to do this?
Edit: By suggesting that a solution would be to execute the tests in a particular order, I have probably gone too far and in the wrong direction. I don't care about the order of the tests. It's just that some of the tests will always fail if a prerequisite isn't valid. E.g. I have a test class that tests a DAO class (ok, probably not a UNIT test, but there's logic in the database stored procedures that needs to be tested, but that's not the point here I think). I need to insert some records into a table in order to test that a method responsible for retrieving the records (let's call it GetAll()) gets them all in the correct order. I do the insert by using a method on the DAO class; let's call it Insert(). I have tests in place that verify that the Insert() method works as expected. Now I want to test the GetAll() method. In order to get the database into the desired state I use the Insert() method. If Insert() doesn't work, most tests for GetAll() will fail. I'd prefer to mark the tests that can't pass because Insert() doesn't work as inconclusive rather than failed. It would ease finding the cause of the problem if I knew which method/test to look into first.
You can't (and shouldn't) execute unit tests in a specific order. The underlying reason for this is to prevent Interacting Tests - I realize that your motivation for requesting such a feature is different, but that's the reason why unit test frameworks don't allow you to order tests. In fact, last time I checked, xUnit.net even randomizes the order.
One could argue that the fact that some of your tests depend on a different method call on the same class is a symptom of tight coupling, but that's not always the case (state machines come to mind).
However, if possible, consider using a Back Door instead of the other method in question.
If you can't do either that or decouple the interdependency (e.g. by making the first method virtual and using the Extract and Override technique), you will have to live with it.
Here's an example:
public class MyClass
{
    public virtual void FirstMethod() { /* do something... */ }
    public void SecondMethod() { }
}
Since FirstMethod is virtual, you can derive from MyClass and override its behavior. You can also use a dynamic mock to do that for you. With Moq, it would look like this:
var sutStub = new Mock<MyClass>();
// by default, Moq overrides all virtual methods without calling base
// Now invoke both methods in sequence:
sutStub.Object.FirstMethod(); // overridden by Moq, so it does nothing
sutStub.Object.SecondMethod();
I think I would indeed have the assertion on the method_A() result in every test relying on its result, even if this introduces some duplication. Then I would use the assertion message to point to the method_A() failure.
assertEquals("method_A() returned true", true, rc);
Perhaps I will end up extracting the method_A() call and the assertion into a helper function to remove the duplication.
Now let's imagine method_A() queries an object and returns it, or NULL when no object is found. Then this assertion is a guard, and it is necessary in languages such as C or C++ that do not have a NullPointerException.
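Since the original question mentions VSUnit and wanting the dependent tests reported as inconclusive rather than failed, that guard helper can also call Assert.Inconclusive when the prerequisite does not hold. A minimal MSTest-flavoured sketch; the DAO below is a hypothetical in-memory stand-in for the one described in the question:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// In-memory stand-ins for the DAO described in the question (shapes are assumptions).
public record Person(string Name);

public class PersonDao
{
    private readonly List<Person> _rows = new();
    public bool Insert(Person p) { _rows.Add(p); return true; }
    public List<Person> GetAll() => new(_rows);
}

[TestClass]
public class PersonDaoGetAllTests
{
    // Guard helper: if the prerequisite (Insert) is broken, report this test as
    // Inconclusive and point the reader at the test that covers Insert itself.
    private static void RequireInsertWorks(PersonDao dao, Person person)
    {
        if (!dao.Insert(person))
        {
            Assert.Inconclusive("Prerequisite failed: Insert() is broken; see Test_Insert.");
        }
    }

    [TestMethod]
    public void GetAll_ReturnsInsertedRecords()
    {
        var dao = new PersonDao();
        RequireInsertWorks(dao, new Person("Alice"));   // guard first
        Assert.AreEqual(1, dao.GetAll().Count);
    }
}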
I'm afraid you can't do this. The only solution is to redesign your code and break it up into smaller methods so that unit tests can call these one by one. Of course this isn't always desirable.
With Visual Studio you can order your tests: see here. But I'd like to advise you to stay away from this technique as much as possible: unit tests are meant to be run anywhere, anytime and in every order.
EDIT: why is this a problem for you? All failing tests point to the same method anyway...