Specifying test dependencies in CppUnit?

I would like to specify the order of testing in CppUnit. According to my research, the testing order depends on the compiler or linker and the order in which they encounter the files.
How does one specify dependencies in CppUnit?
For example, let us consider a rectangle class that has four lines. Each line contains two point classes. Assume that each class is in a separate module or translation unit.
struct Point
{
    int x;
    int y;
};
struct Line
{
    Point a;
    Point b;
};
struct Rectangle
{
    Line top;
    Line left;
    Line right;
    Line bottom;
};
In the above code, the Point class should be tested first, then the Line class and finally the Rectangle class. There is no reason to test the Rectangle class if the Line or Point classes have problems. This is a very simplified example.
For composite classes, the inner classes or member data type classes should be tested first.
Let us assume that each class has an associated testing class. Each test class has its own published test methods (which are registered in the CppUnit list), in separate files. The class for testing Lines has no knowledge of the testing class for Points, and similarly for the Rectangle. When these test case classes are compiled, their order depends on the compiler and linker.
So, how does one order the test cases?
FYI, I am using CppUnit, wxTestRunner and Visual Studio 2008

What you're trying to do isn't really unit testing. "Pure" unit testing is intended to test individual units (individual classes), using mocks or fake objects in the place of real dependencies; once you're testing classes' dependencies on each other, that's integration testing, not unit testing.
With that disclaimer out of the way...
It looks like you might be able to use CPPUNIT_TEST_SUITE_NAMED_REGISTRATION to create multiple suites and then run each suite in order, only if all previous suites have passed, but you might need to hack up or replace the wxTestRunner test runner to do this.
CppUnit's page on Creating TestSuite has other options for registering test suites; CPPUNIT_REGISTRY_ADD, for example, lets you create a hierarchy of suites, which should give you some control over the ordering, but I don't see any way for a failure in one suite to abort subsequent tests.
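As a rough sketch of the named-registration route, assuming each class has a CppUnit fixture called PointTest, LineTest and RectangleTest (those fixture names, the suite names and the stop-on-failure loop are my own additions, not something CppUnit or wxTestRunner provides; the sketch also uses CppUnit's plain text runner rather than wxTestRunner just to keep it short):
// In PointTest.cpp, LineTest.cpp and RectangleTest.cpp, register each fixture
// under its own suite name instead of in the default registry:
CPPUNIT_TEST_SUITE_NAMED_REGISTRATION(PointTest, "PointTests");
CPPUNIT_TEST_SUITE_NAMED_REGISTRATION(LineTest, "LineTests");
CPPUNIT_TEST_SUITE_NAMED_REGISTRATION(RectangleTest, "RectangleTests");

// main.cpp: run the named suites in dependency order and stop at the first
// failing suite, so Rectangle tests never run if Point or Line tests break.
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

int main()
{
    const char* suites[] = { "PointTests", "LineTests", "RectangleTests" };
    for (int i = 0; i < 3; ++i)
    {
        CppUnit::TextUi::TestRunner runner;
        runner.addTest(CppUnit::TestFactoryRegistry::getRegistry(suites[i]).makeTest());
        if (!runner.run())   // run() returns false when any test in the suite fails
            return 1;        // abort the remaining, dependent suites
    }
    return 0;
}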
Finally, just as a suggestion, CppUnit is probably not the best C++ unit testing framework these days. I'm personally a fan of Google Test, but Boost.Test and UnitTest++ are also good. (This answer introduces a personal project called Saru that sounds like it might give you the flexibility you want of ordering tests.)

Related

link a test-class to its main class

I routinely create, for each Java class, a corresponding test class. However, it seems that, from the compiler's point of view, they are not linked in any way. Of course I can link them with mutual references in Javadoc comments; however, I wonder if there is a more standard way to tell the world "class A is the JUnit test of class B" and "class B is the main class tested by class A".
You can't really have any compile-time reference since you typically distribute main code without test code. There is only a naming convention. For a class named Foo I use:
FooTest for unit tests
FooSomeSpecificPartOrBehaviorTest for unit tests of only a subset of class behaviors.
FooIT for integration tests
This convention is pretty standard and, for example, in IntelliJ IDEA Ctrl + Shift + T lets you quickly navigate between main and test classes based purely on the naming convention.
Usually the "connection" between the class under test and the test class is established via a naming convention: SHA1DigestTest contains the test for the SHA1Digest class.

Many Test classes or one Test class with many methods?

I have a PersonDao that I'm writing unit tests against.
There are about 18-20 methods in PersonDao of the form -
getAllPersons()
getAllPersonsByCategory()
getAllPersonsUnder21() etc
My approach to testing this was to create a PersonDaoTest with about 18 test methods, one testing each of the methods in PersonDao.
Then I created a PersonDaoPaginationTest that tested these 18 methods by applying pagination parameters.
Is this in anyway against the TDD best practices? I was told that this creates confusion and is against the best practices since this is non-standard. What was suggested is merging the two classes into PersonDaoTest instead.
As I understand it, the more your code is broken down into many classes, the better. Please comment.
The fact that you have a set of 18 tests that you are going to have to duplicate to test a new feature is a smell that suggests that your PersonDao class is taking on multiple responsibilities. Specifically, it appears to be responsible both for querying/filtering and for pagination. You may want to take a look at whether you can do a bit of design work to extract the pagination functionality into a separate class which could then be tested independently.
But in answer to your question, if you find that you have a class that you want to remain complex, then it's perfectly fine to use multiple test classes as a way of organizing a large number of tests. @Gishu's answer of grouping tests by their setup is a good approach. @Ryan's answer of grouping by "facets" or features is another good approach.
Can't give you a sweeping answer without looking at the code... except use whatever seems coherent to you and your team.
I've found that grouping tests based on their setup works out nicely in most cases, i.e. if 5 tests require the same setup, they usually fit nicely into a test fixture. If the 6th test requires a different setup (more or less), break it out into a separate test fixture.
This also leads to test fixtures that are feature-cohesive (i.e. tests grouped by feature); give it a try. I'm not aware of any best practice that says you need to have one test class per production class... in practice I find I have n test classes per production class; the best practice would be to use good names and keep related tests close (in a named folder).
My 2 cents: when you have a large class like that, with different "facets" to it, like pagination, I find it can often make for more understandable tests not to pack them all into one class. I can't claim to be a TDD guru, but I practice test-first development religiously, so to speak. I don't do it often, but it's not exactly rare, either, that I'll write more than a single test class for a particular class. Many people seem to forget good coding practices like separation of concerns when writing tests, though. I'm not sure why.
I think one test class per class is fine - if your implementation has many methods, then your test class will have many methods - big deal.
You may consider a couple of things however:
Your methods seem a bit "overly specific" and could use some abstraction or generalisation, for example instead of getAllPersonsUnder21() consider getAllPersonsUnder(int age)
If there are some more general aspects of your class, consider testing them using some common test code with callbacks. For a trivial example, to illustrate testing that getAllPersons() returns multiple hits, do this:
@Test
public void testGetAllPersons() throws Exception {
    assertMultipleHits(new Callable<List<?>>() {
        public List<?> call() throws Exception {
            return myClass.getAllPersons(); // Your callback is here
        }
    });
}

public static void assertMultipleHits(Callable<List<?>> methodWrapper) throws Exception {
    assertTrue("failure to get multiple items", methodWrapper.call().size() > 1);
}
This static method can be used by any class to test whether "some method" returns multiple hits. You could extend this to do lots of tests over the same callback, for example running it with and without a DB connection up, testing that it behaves correctly in each case.
I'm working on test automation of a web app using Selenium. It is not unit testing, but you might find that some principles apply. The tests are very complex, and we figured out that the only way to implement them in a way that meets all our requirements was to have one test per class. So we consider each class to be an individual test, and then we were able to use methods as the different steps of the test. For example:
public class SignUpTest
{
    public SignUpTest(Map<String, Object> data) {}

    public void step_openSignUpPage() {}
    public void step_fillForm() {}
    public void step_submitForm() {}
    public void step_verifySignUpWasSuccessfull() {}
}
All the steps are dependent: they follow the order specified, and if one fails the others will not be executed.
Of course, each step is a test by itself, but all together they form the sign up test.
The requirements were something like:
Tests must be data driven, that is, execute the same test in parallel with different inputs.
Tests must run in different browsers in parallel as well, so each test will run "input_size x browsers_count" times in parallel.
Tests will focus on a web workflow, for example "sign up with valid data", and they will be split into smaller test units for each step of the workflow. This makes things easier to maintain and debug (when you have a failure, it will say SignUpTest.step_fillForm() and you'll know immediately what's wrong).
Test steps share the same test input and state (for example, the id of the user created). Imagine if you put steps of different tests in the same class, for example:
public class SignUpTest
{
    public void signUpTest_step_openSignUpPage() {}
    public void signUpTest_step_fillForm() {}
    public void signUpTest_step_submitForm() {}
    public void signUpTest_step_verifySignUpWasSuccessfull() {}

    public void signUpNegativeTest_step_openSignUpPage() {}
    public void signUpNegativeTest_step_fillFormWithInvalidData() {}
    public void signUpNegativeTest_step_submitForm() {}
    public void signUpNegativeTest_step_verifySignUpWasNotSuccessfull() {}
}
Then having state belonging to the two tests in the same class would be a mess.
I hope I was clear and you find this useful. In the end, choosing what represents your test (a class or a method) is just a decision that I think will depend on: what the target of a test is (in my case, a workflow around a feature), what's easier to implement and maintain, how you make a failure more precise and easier to debug when a test fails, what will lead you to more readable code, etc.

what to unit test after refactoring common code into a separate class

Let's say I've got two classes, each fully tested. There's duplication though, so I refactor the common code into a new class. Should I unit test that new class? If so, how?
Here are the options I can see:
don't unit test the new class (it's already fully tested by the original tests)
copy+paste the original tests to the new test class
move the original tests to the new test class and replace original tests with mocks
leave original tests alone, but write more fine-grained tests in the new test class
There's duplication though, so I refactor the common code into a new class.
I'm taking that to mean that the two old classes now inherit the common behavior from the new class. If that's the case, then the old test cases should already be testing the common behavior and there's probably no need to write separate tests.
If that's not the case (like if you're creating a utility class whose methods are called by the two original classes), then I'd probably move the tests to their own unit test class so they only need to be in one place.
If your refactored class is covered by existing tests, you are probably ok.
Looking at your options, I would also do number 4. If you did some refactoring, you probably made something more generic than it was before. In that case, you could test the generic functionality in a generic manner. So I would do 4 if your refactored solution is more generic. If it is just moving code around to be DRY, I would probably do 1.
If it was me, then I'd give the new class its own set of unit tests. Those tests would be a copy + paste of the previous tests which ran against the same code.
Although you're duplicating work, in the long term you need to think about how this new class might change and having those tests in a new unit test class/fixture will be cleaner for you.

NUnit, TestDriven.Net: Duplicate test results with partial test classes

I just discovered that I was getting twice the number of tests run that I should've been getting. Discovered it when a test broke and I got two identical test failures. Same test, same everything. Got me quite confused, but managed to narrow it down to a certain test class that was a partial class.
The reason it was a partial class was that I had split a test class in two, just to make it a bit cleaner. The class under test had a certain method that required a long series of tests, and I thought it would be cleaner to have those in a separate file. But since there were one or two helper methods used by both, I figured I could just make the class partial so both files still had access to those methods.
The test framework is NUnit and the tests were run using TestDriven.Net. I ran the tests from inside a single test method (reported two tests passed instead of one), on the class (got twice the number of tests passed), and on the whole test project.
Managed to fix the issue by making the classes not partial and just duplicating those tiny helper methods (might move those to a separate helper class or something later).
Now... why on earth is this happening? I thought partial classes were compiled into a single class? Is this an issue with partial classes in general, NUnit, Test-Driven.net or something completely different?
You probably put the [TestFixture] attribute in both files of the partial class. This causes the attribute to be emitted twice on the class definition in the IL, and NUnit will run the same test code twice. You should only add [TestFixture] in one of the files of your partial class.

Is it okay to store test-cases inside the corresponding class? (C++)

I have started to write my tests as static functions in the class I want to test. These functions usually test functionality: they create a lot of objects, measure memory consumption (to detect leaks), etc.
Is this good or bad practice? Will it bite me in the long run?
I keep them separate, because I don't want test classes to be included in the deployable artifacts. No sense increasing the size of the .exe or making them available to clients.
I'd recommend writing unit tests with CppUnit.
No, you should write unit tests instead.
I would say that it isn't best-practice, but it is "okay", in that it won't break your program or cause it to crash. There are a lot of things that are okay to do, but you shouldn't do.
Test code is fundamentally different from production code in terms of ownership, deployment, non-functional requirements and so on. Therefore it is better to keep it separate from the code being tested, in separate files and probably even in separate directories.
To facilitate whitebox unit testing of the class under test, you often need to declare the test class/test functions as friends. Some classes are unit-testable with the public members only, so adding friends is not always necessary.
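A minimal sketch of that friend approach, using a Rectangle like the one in the question but with private members (the fixture name RectangleTest is an assumption):
// Rectangle.h - the production class only carries a friend declaration;
// no test logic lives inside it.
class RectangleTest;                // forward declaration of the test fixture

class Rectangle
{
    friend class RectangleTest;     // lets the fixture inspect private state
public:
    // ... public interface ...
private:
    Line top, left, right, bottom;  // private details the tests may verify
};
// RectangleTest.cpp lives in the test project and, thanks to the friend
// declaration, its assertions can read rectangle.top, rectangle.left, etc.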
Combining test code and code under test is simple: you just link the object files together in the same project.
Sometimes you can see unit test code that #includes the code under test, but I would advise against that - for example, if you have test coverage measurement tooling in place (highly recommended!), the measurements won't be correct for the code under test.
If you have your test cases inside your class, it's hard to have things like fixtures.
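For instance, a CppUnit fixture with shared setUp/tearDown is straightforward once the tests live in their own class; here is a rough sketch using the Line struct from the question (the fixture and test method names are invented for the example):
#include <cppunit/extensions/HelperMacros.h>

class LineTest : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE(LineTest);
    CPPUNIT_TEST(testEndpointsAreStored);
    CPPUNIT_TEST_SUITE_END();

public:
    void setUp()                      // runs before every test method
    {
        line.a.x = 0; line.a.y = 0;
        line.b.x = 3; line.b.y = 4;
    }

    void tearDown() {}                // runs after every test method

    void testEndpointsAreStored()
    {
        CPPUNIT_ASSERT_EQUAL(3, line.b.x);
        CPPUNIT_ASSERT_EQUAL(4, line.b.y);
    }

private:
    Line line;                        // shared state initialized by setUp()
};

CPPUNIT_TEST_SUITE_REGISTRATION(LineTest);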
I am also going to give a shout out to Boost.Test. The learning curve is a little high but it is amazing once you get used to it.
That is bad practice.
You should have a separate class to test the class you are creating. The way you are doing it, you are bloating production code with test code, which is not what you should do.
The way you want to test the class Foo is like this:
// Foo.h
class Foo {
public:
    int GetInt() { return 15; }
};

// FooTest.cpp (Google Test syntax shown as an example)
#include <gtest/gtest.h>
#include "Foo.h"

TEST(FooTest, testGetIntShouldReturn15) {
    Foo foo;
    ASSERT_EQ(15, foo.GetInt());
}