The built-in unit testing functionality (unittest { ... } code blocks) only seems to be activated when running a compiled executable.
How can I activate unit tests in a library with no main function?
This is somewhat related to this SO question, although the accepted answer there deals with a workaround via the main function.
As an example, I would expect unit testing to fail on a file containing only this code:
int foo(int i) { return i + 1; }

unittest {
    assert(foo(1) == 1); // should fail
}
You'll notice I don't have a module declaration at the top. I'm not sure whether that matters for this specific question, but in reality I would have a module statement at the top.
How can I activate unit tests in a library with no main function?
You can use DMD's -main switch, or rdmd's --main switch, to add an empty main function to the set of compiled source files. This allows creating a unit test binary for your library.
If you use Dub, dub test will do something like the above automatically.
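For example (the file name mylib.d is just illustrative):

dmd -unittest -main -run mylib.d   # compile the unittest blocks in, add an empty main, and run
rdmd --main -unittest mylib.d      # the same idea using rdmd
dub test                           # in a Dub-managed project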
Related
I have often written my test code in the main function while developing an API, but because D has integrated unittest blocks I want to start using them.
My current workflow is the following: I have a script that watches for changes in any .d file; if the script finds a modified file, it runs dub build.
The problem is that dub build doesn't seem to build the unittest blocks:
module foo;

struct Bar { .. }

unittest {
    ...
    // some syntax error here
    ...
}
It only compiles the unittest blocks if I explicitly run dub test, but I don't want to compile and run them at the same time.
The second problem is that I want to be able to run the unittests for a single module, for example:
dub test module foo
Would this be possible?
You can program a custom test runner using the getUnitTests trait.
getUnitTests
Takes one argument, a symbol of an aggregate (e.g. struct/class/module). The result is a tuple of all the unit test functions of that aggregate. The functions returned are like normal nested static functions, CTFE will work and UDA's will be accessible.
In your main() you should be able to write something that takes an arbitrary number of modules:
void runModuleTests(Modules...)()
{
    static if (Modules.length >= 1)
    {
        // run every unittest block of the first module, then recurse
        foreach (test; __traits(getUnitTests, Modules[0]))
            test();
        static if (Modules.length > 1)
            runModuleTests!(Modules[1 .. $])();
    }
}
Of course, the -unittest switch must be passed to dmd.
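For example, a minimal runner (the module names foo and bar are just placeholders for your own modules) might look like this:

// main.d -- build with: dmd -unittest main.d foo.d bar.d
import foo;
import bar;

void main()
{
    // runModuleTests is the template defined above
    runModuleTests!(foo, bar)();
}

Keep in mind that compiling with -unittest also makes the default D runtime run all compiled-in unittest blocks once before main is entered, so a custom runner like this is mainly useful for selecting which modules' tests to invoke.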
We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit [Fact], as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see whether it returns HTTP 200 ("OK"). If any fail, we currently print it as a failure by using
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test"; it's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there is a workaround, albeit a slightly hacky one, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying the [RunWith(typeof(CustomRunner))] attribute on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
Unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods, the way the tests to be run are passed to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing each of the generated methods does is invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See the linked gist for further explanation.
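Conceptually, the emitted test class is equivalent to something like the following hand-written class. The class name, method names and file names here are purely illustrative and not taken from the linked gist, and it assumes that PileOfTests (or whichever fixture class carries the attribute) exposes the RunTest(string) method mentioned above:

public class GeneratedFileTests
{
    [Fact]
    public void Case_001_json()
    {
        // one generated [Fact] method per *.json file found at discovery time
        new PileOfTests().RunTest("001.json");
    }

    [Fact]
    public void Case_002_json()
    {
        new PileOfTests().RunTest("002.json");
    }
}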
Hope this helps; feel free to use the example implementation if you like.
I want to create some tests with boost::unit_test for my parallel (MPI-based) C++ code. I have some basic experience with the test framework. For me, the main problem when moving to parallel code is where to put MPI::Init so that it is called first; in the test suites I have created there is no main function. Furthermore, does Boost.Test exit correctly (with respect to MPI) when some assertions fail on a subset of the existing ranks?
Boost.Test has fixture support, which allows you to perform setup/cleanup per test case, per test suite, or globally. It sounds like you should put the call to MPI::Init in a global fixture.
struct MPIFixture {
    MPIFixture() { MPI::Init(); }
    ~MPIFixture() { MPI::Finalize(); }  // MPI::Finalize() is the matching deinit call
};

BOOST_GLOBAL_FIXTURE(MPIFixture);
If you have trouble working with that, or if you are working in a framework that provides its own main function, then you can #define BOOST_TEST_NO_MAIN before including the Boost headers. Then you can invoke boost::unit_test::unit_test_main yourself to run your test suites.
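A minimal sketch of that custom-main approach (assuming the header-only variant, Boost.Test's alternative init API, and the MPI C++ bindings used in the question) could look like this:

#define BOOST_TEST_MODULE mpi_tests
#define BOOST_TEST_NO_MAIN
#define BOOST_TEST_ALTERNATIVE_INIT_API
#include <boost/test/included/unit_test.hpp>
#include <mpi.h>

BOOST_AUTO_TEST_CASE(SanityCheck)
{
    BOOST_CHECK(MPI::Is_initialized());
}

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);
    // init_unit_test is generated by BOOST_TEST_MODULE when the
    // alternative init API is enabled.
    int result = boost::unit_test::unit_test_main(&init_unit_test, argc, argv);
    MPI::Finalize();
    return result;
}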
I am using the Boost.Test library for implementing unit test cases in C++. Suppose I have two suites such as

BOOST_AUTO_TEST_SUITE(TestA)

BOOST_AUTO_TEST_CASE(CorrectAddition)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);
}

BOOST_AUTO_TEST_CASE(WrongAddition)
{
    BOOST_CHECK_EQUAL(2 + 2, 5);
}

BOOST_AUTO_TEST_SUITE_END()

BOOST_AUTO_TEST_SUITE(TestB)

BOOST_AUTO_TEST_CASE(CorrectAddition)
{
    bool ret = true;
    BOOST_CHECK_EQUAL(ret, true);
}

BOOST_AUTO_TEST_CASE(WrongAddition)
{
    BOOST_CHECK_EQUAL(2 + 2, 5);
}

BOOST_AUTO_TEST_SUITE_END()
and I would like to run only, say, suite TestB. How shall I execute it?
Thank you for your time and help. Sorry if this question has been posted or documented elsewhere.
According to this documentation, the OP should call the unit test executable with the following parameter
--run_test=TestB
to run only the unit tests of test suite TestB.
If the unit test CorrectAddition of all test suites shall be run, then the parameter is
--run_test=*/CorrectAddition
The wildcard support of Boost.Test is quite powerful, so the parameter could also be written as
--run_test=*/C*
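Suite and case filters can also be combined into a single path; for example, to run only the CorrectAddition case inside TestB (using the suite/case names from the question):

--run_test=TestB/CorrectAddition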
Assuming you are using the library-supplied main entry point, command-line parsing, etc., and haven't rolled your own, you can select specific test suites and test cases by name or pattern via a command-line switch at run time.
See this page in the documentation for a good example.
I am looking for a test framework that suits my requirements. The following are the steps that I need to perform during automated testing:
SetUp (some input files need to be read or copied into specific folders)
Execute (run the standalone program)
Tear Down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence to ensure that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and CppUnit with SCons for this. I thought I'd post this question to make sure I am heading in the right direction. Can you suggest any other test framework tools? And what other requirements should be considered when selecting the right test framework?
Try googletest, a.k.a. gTest. It is no worse than any other unit test framework, and it beats some of them in ease of use. It is not exactly the integration testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample from the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if their
  // bodies are empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  virtual ~FooTest() {
    // You can do clean-up work that doesn't throw exceptions here.
  }

  // If the constructor and destructor are not enough for setting up
  // and cleaning up each test, you can define the following methods:

  virtual void SetUp() {
    // Code here will be called immediately after the constructor (right
    // before each test).
  }

  virtual void TearDown() {
    // Code here will be called immediately after each test (right
    // before the destructor).
  }

  // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace
SCons can take care of rebuilding your .cc files when they change, and gTest can be used to set up and tear down your tests.
I can only add that we use gTest in some cases and a custom in-house test automation framework in almost all others. It is often the case with such tools that it can be easier to write your own than to try to adjust and tweak another one to match your requirements.
One good option IMO, and something our test automation framework is moving towards, is using nosetests coupled with a library of common routines (like starting/stopping services, getting the status of something, enabling/disabling logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python rather than C++, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++.
After reading this article, http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle, some time ago, I went for CxxTest.
Once you have it set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.