I want to create some tests with boost::unit_test for my parallel (MPI-based) C++ codes. I have some basic experience with the test framework. For me, the main problem when moving to parallel codes is where to put MPI::Init so that it is called first; in the test suites I have created there is no main function. Furthermore, does Boost.Test exit correctly (with respect to MPI) when some assertions fail on a subset of the existing ranks?
Boost Test has fixture support, which allows you to perform setup/cleanup per test case, test suite, or globally. Sounds like you should put the call to MPI::Init in a global fixture.
#include <boost/test/unit_test.hpp>
#include <mpi.h>

struct MPIFixture {
    MPIFixture()  { MPI::Init(); }
    ~MPIFixture() { MPI::Finalize(); }  // the deinit you should call
};

BOOST_GLOBAL_FIXTURE(MPIFixture);
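(Note that the MPI:: C++ bindings were deprecated in MPI-2.2 and removed in MPI-3.) Here is a sketch of the same fixture against the plain C API, pulling argc/argv from Boost.Test's master test suite; treat the details as an assumption to check against your MPI and Boost versions:

#include <boost/test/unit_test.hpp>
#include <mpi.h>

struct MPIFixtureC {
    MPIFixtureC() {
        // Boost.Test keeps the original command line on the master test suite.
        int argc    = boost::unit_test::framework::master_test_suite().argc;
        char** argv = boost::unit_test::framework::master_test_suite().argv;
        MPI_Init(&argc, &argv);
    }
    ~MPIFixtureC() { MPI_Finalize(); }
};

BOOST_GLOBAL_FIXTURE(MPIFixtureC);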
If you have trouble working with that, or if you are working in a framework that provides its own main function, then you can #define BOOST_TEST_NO_MAIN before including the Boost headers. Then you can invoke boost::unit_test::unit_test_main yourself to run your test suites.
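For the MPI case, a minimal sketch of such a custom entry point (assuming the header-only variant of Boost.Test and the alternative init API; the module name is a placeholder):

#define BOOST_TEST_MODULE mpi_tests            // placeholder name
#define BOOST_TEST_NO_MAIN
#define BOOST_TEST_ALTERNATIVE_INIT_API
#include <boost/test/included/unit_test.hpp>
#include <mpi.h>

// Custom main: bring MPI up before any test runs, let Boost.Test drive the
// test execution, then shut MPI down before returning the test result.
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int result = boost::unit_test::unit_test_main(&init_unit_test, argc, argv);
    MPI_Finalize();
    return result;
}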
I'm trying to test parts of my code. I wrote the following test.h file:
#include <boost/test/unit_test.hpp>
BOOST_AUTO_TEST_CASE(my_test) {
    BOOST_CHECK(true);
}
If I run the test, my application's main method is invoked and, since the command-line arguments are missing, it terminates. I want to run just the test suite as it is and have it succeed, since BOOST_CHECK on true should be a passing test. Once this works, I would add calls to functions from my code base one by one for regression testing. Is this possible to do? If yes, how?
This post suggests adding the following define to the top of the test.h file but it does not work for skipping the main method invocation:
#define BOOST_TEST_NO_MAIN true
BOOST_TEST_NO_MAIN makes Boost.Test omit its own main function, so the build falls back to the application's main function.
In your unit test build, do not link the application's main function (do not add the file that contains main), and let Boost.Test add its own main, which will run all your tests.
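A minimal sketch of what such a self-contained test file could look like when it is compiled and linked without the application's main (the module name and the tested function are placeholders):

// test_main.cpp -- built without the file that defines the application's main().
// Boost.Test generates main() because BOOST_TEST_MODULE is defined before the include.
#define BOOST_TEST_MODULE MyRegressionTests
#include <boost/test/included/unit_test.hpp>

// Hypothetical function from the code base under test.
int add_one(int i) { return i + 1; }

BOOST_AUTO_TEST_CASE(my_test) {
    BOOST_CHECK(true);
    BOOST_CHECK_EQUAL(add_one(1), 2);
}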
The built-in unit testing functionality (unittest {...} code blocks) seems to only be activated when running an executable compiled with the -unittest flag.
How can I activate unit tests in a library with no main function?
This is somewhat related to this SO question, although the accepted answer there deals with a workaround via the main function.
As an example, I would expect unit testing to fail on a file containing only this code:
int foo(int i) { return i + 1; }

unittest {
    assert(foo(1) == 1); // should fail
}
You'll notice I don't have a module declaration at the top. I'm not sure whether that matters for this specific question, but in practice I would have a module statement at the top.
How can I activate unit tests in a library with no main function?
You can use DMD's -main switch, or rdmd's --main switch, to add an empty main function to the set of compiled source files. This allows creating a unit test binary for your library.
If you use Dub, dub test will do something like the above automatically.
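For example, assuming the library lives in a single file named mylib.d (adjust for your layout):

# Compile with unit tests enabled, inject an empty main, and run the result:
dmd -unittest -main -run mylib.d

# Or with rdmd:
rdmd -unittest --main mylib.d

# Or, in a Dub project:
dub test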
I am looking for a test framework that suits my requirements. These are the steps I need to perform during automated testing:
SetUp (some input files need to be read or copied into specific folders)
Execute (run the standalone executable)
Tear Down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence so that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and cppunit with SCons for this, and I am asking this question to make sure I am heading in the right direction. Can you suggest any other test framework tools? And what other requirements should be considered when selecting the right test framework?
Try googletest, AKA gTest. It is no worse than any other unit test framework, and can beat some in ease of use. It is not exactly the integration testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if their bodies
  // would be empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  virtual ~FooTest() {
    // You can do clean-up work that doesn't throw exceptions here.
  }

  // If the constructor and destructor are not enough for setting up
  // and cleaning up each test, you can define the following methods:

  virtual void SetUp() {
    // Code here will be called immediately after the constructor (right
    // before each test).
  }

  virtual void TearDown() {
    // Code here will be called immediately after each test (right
    // before the destructor).
  }

  // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace
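The sample stops at the test case definitions; to get a runnable test binary you also need a main, either by linking against gtest_main or by writing one yourself, roughly like this:

#include <gtest/gtest.h>

int main(int argc, char** argv) {
  // Parses gtest-specific flags (e.g. --gtest_filter) and then runs every
  // registered test, returning non-zero if any of them fail.
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}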
SCons could take care of rebuilding your .cc files when they change; gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to adjust and tweak another one to match your requirements.
One good option IMO, and something our test automation framework is moving towards, is using nosetests coupled with a library of common routines (start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python rather than C++, more people can be busy creating test cases, including QEs, who do not necessarily need to be able to write C++.
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago I went for CxxTest.
Once you have it set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it is misleading when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory feature (thanks clairestreb). The NUnit.Framework namespace contains a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
In any case, the test class design you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Add works), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
Create a shared flag that the Add test sets when it fails (wrap the old test code in a catch-all), and have the Get test return early if the flag is set:
public boolean addFailed = false;

public void testAdd () {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t; // Don't forget to rethrow
    }
}

public void testGet () {
    if (addFailed) return;
    ... old test code ...
}
I am using the provided Unit Test Engine in Visual Studio 2005 and am wondering if there is a way for me to specify the order of tests. I have numerous test classes and numerous test methods inside of each. I would like to control the order in which the test classes are executed and the order of the test methods in each.
Why? Tests are supposed to be able to run on their own. If you need to execute some particular code before running each test, put it in your test class:
// Use TestInitialize to run code before running each test
[TestInitialize()]
public void MyTestInitialize()
{
}

// Use TestCleanup to run code after each test has run
[TestCleanup()]
public void MyTestCleanup()
{
}
I don't think the order of your test execution should matter. But if you really need to order them, follow Matthew Whited's link.
You can use Ordered Tests:
http://msdn.microsoft.com/en-us/library/ms182629(VS.80).aspx