Using qExec to create a Qt Test suite - C++

QTest encourages you to organize unit tests as separate executables. There is a special macro for this that generates the main function: QTEST_MAIN.
I found this approach not very clean; it is much more useful to run all tests at once. So I searched for a way of doing so and found a few people proposing the same solution:
Qt: run unit tests from multiple test classes and summarize the output from all of them
http://www.davideling.it/2014/01/qtest-multiple-unit-test-classes/
https://alexhuszagh.github.io/2016/using-qttest-effectively/
The solution was to give up on the QTEST_MAIN macro and write your own main function in which you execute the tests you want:
#include <QtTest>
// TestA and TestB are QObject subclasses with private test slots;
// their headers are assumed to be included as well.

int main(int argc, char *argv[])
{
    int status = 0;
    {
        TestA ta;
        status |= QTest::qExec(&ta, argc, argv);
    }
    {
        TestB tb;
        status |= QTest::qExec(&tb, argc, argv);
    }
    return status;
}
I found it to be a great idea; however, there is a problem. Qt's documentation for qExec contains a passage that reads:
For stand-alone test applications, this function should not be called
more than once, as command-line options for logging test output to
files and executing individual test functions will not behave
correctly.
The solution proposed by those people suggests just that: executing qExec more than once. Can anyone explain to me what exactly "command-line options for logging test output to files and executing individual test functions will not behave correctly" means?
What exactly could go wrong with this approach?

The documentation is probably talking about Logging Options. If you call qExec twice and pass the -o option to both calls, the second call will probably overwrite the log file from the first call. If you know that this will never happen, you might choose to ignore the warning. You can also refrain from passing the command-line arguments to qExec; that way you force the output to stdout, but you lose the ability to pass other arguments, of course.
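For illustration, here is a minimal sketch of that last variant, assuming the TestA/TestB classes from the question; it uses the QStringList overload of QTest::qExec so the real command line is never forwarded:
#include <QtTest>

int main(int argc, char *argv[])
{
    Q_UNUSED(argc);
    // Deliberately drop the real arguments so that -o and test-function
    // filters can never apply to more than one qExec call; all output
    // goes to stdout.
    const QStringList noArgs{ argv[0] };
    int status = 0;
    {
        TestA ta;
        status |= QTest::qExec(&ta, noArgs);
    }
    {
        TestB tb;
        status |= QTest::qExec(&tb, noArgs);
    }
    return status;
}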
If you want to run the test cases from Qt Creator, you should also not call qExec more than once. Each test class will show up in the test list, but running one will just run them all, so the results of every class get displayed under the one class you ran. And if you run all tests (the default), you get the squared number of results.
So if you don't like the multiple-executable approach, just use Google Test. It has none of the above problems, and Qt Creator provides support for it. The setup is very easy: a wizard will guide you when you create an Autotest project. The only thing you have to do is download Google Test.
Google test cases will show up right next to Qt Test cases in the test views.
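For a taste of the difference, a QtTest private slot rewritten as a Google Test case might look like this sketch (the suite and test names are made up for illustration):
#include <gtest/gtest.h>

// Each TEST is registered and discovered automatically: no QTEST_MAIN
// and no hand-written main with repeated qExec calls. A single test can
// be selected at run time with --gtest_filter=TestA.SomethingWorks.
TEST(TestA, SomethingWorks) {
    EXPECT_EQ(2, 1 + 1);
}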

Related

Unit testing D library

The built-in unit testing functionality (unittest {...} code blocks) seems to only be activated when running a compiled executable.
How can I activate unit tests in a library with no main function?
This is somewhat related to this SO question, although the accepted answer there deals with a workaround via the main function.
As an example, I would expect unit testing to fail on a file containing only this code:
int foo(int i) { return i + 1; }
unittest {
    assert(foo(1) == 1); // should fail
}
You'll notice I don't have a module declaration at the top. I'm not sure if that matters for this specific question, but in reality I would have a module statement at the top.
How can I activate unit tests in a library with no main function?
You can use DMD's -main switch, or rdmd's --main switch, to add an empty main function to the set of compiled source files. This allows creating a unit test binary for your library.
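For example, assuming the snippet above is saved as mylib.d (a hypothetical file name), and noting that unittest blocks are only compiled in when the -unittest flag is also passed:
dmd -unittest -main mylib.d
./mylib
With rdmd, rdmd --main -unittest mylib.d compiles and runs the tests in one step.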
If you use Dub, dub test will do something like the above automatically.

How to make CppUnit logging more verbose?

I'm using CppUnit to write unit tests for a C++ library. By default it prints a single "." character to the console for each test. I'd like to log the name of each test on a separate line, before the test runs.
I've looked into the CppUnit API, but it's not at all obvious how to customize the output. Instead of offering customization options, it's more of a framework that you can plug new handlers into. (The tutorial hasn't helped, either.) I could probably spend a day figuring out how to do this, but I can't afford to lose the time. Could someone provide a quick snippet that can customize the per-test log output?
It is simple enough to define and install a custom progress listener to emit the name of each test before it's performed. Here's one I wrote today:
#include <cppunit/TextTestProgressListener.h>
#include <cppunit/Test.h>
#include <cstdio>

class MyCustomProgressTestListener : public CppUnit::TextTestProgressListener {
public:
    // Called by the framework before each test executes.
    virtual void startTest(CppUnit::Test *test) {
        fprintf(stderr, "starting test %s\n", test->getName().c_str());
    }
};
Install it on a test runner like this:
CppUnit::TextUi::TestRunner runner;
MyCustomProgressTestListener progress;
runner.eventManager().addListener(&progress);
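Putting it together, a complete runner might look like the following sketch; it assumes your suites are registered with the standard CPPUNIT_TEST_SUITE_REGISTRATION macro:
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

int main() {
    CppUnit::TextUi::TestRunner runner;
    // Pull in every suite registered via CPPUNIT_TEST_SUITE_REGISTRATION.
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
    // Print each test's name before it runs.
    MyCustomProgressTestListener progress;
    runner.eventManager().addListener(&progress);
    return runner.run() ? 0 : 1;
}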

iOS Unit Testing: XCTestSuite, XCTest, XCTestRun, XCTestCase methods

In my daily unit test coding with Xcode, I only use XCTestCase. There are also other classes that don't seem to get used much, such as XCTestSuite, XCTest, and XCTestRun.
What are XCTestSuite, XCTest, XCTestRun for? When do you use them?
Also, XCTestCase header has a few methods such as:
defaultTestSuite
invokeTest
testCaseWithInvocation:
testCaseWithSelector:
How and when to use the above?
I am having trouble finding documentation on the above XCTest-classes and methods.
Well, this question is pretty good, and I was just wondering why it was being ignored.
As the documentation says:
XCTestCase is a concrete subclass of XCTest that should be the override point for
most developers creating tests for their projects. A test case subclass can have
multiple test methods and supports setup and tear down that executes for every test
method as well as class level setup and tear down.
On the other hand, this is what XCTestSuite defined:
A concrete subclass of XCTest, XCTestSuite is a collection of test cases. Alternatively, a test suite can extract the tests to be run automatically.
Well, with XCTestSuite you can construct your own test suite for a specific subset of test cases, instead of the default suite ([XCTestCase defaultTestSuite]), which has all test cases.
Actually, the default XCTestSuite is composed of every test case found in the runtime environment - all methods with no parameters, returning no value, and prefixed with ‘test’ in all subclasses of XCTestCase.
What about the XCTestRun class?
A test run collects information about the execution of a test. Failures in explicit
test assertions are classified as "expected", while failures from unrelated or
uncaught exceptions are classified as "unexpected".
With XCTestRun, you can record information like startDate, totalDuration, and failureCount while a test is running, or things like hasSucceeded when it is done, and thereby get the result of running a test. XCTestRun gives you the ability to observe what is happening, or what has happened, during a test.
Back to XCTestCase: you will notice that there are methods named testCaseWithInvocation: and testCaseWithSelector: if you read the source code, and I recommend you do so for more digging.
How do they work together?
I've found that there is an awesome explanation in Quick's QuickSpec source file.
XCTest automatically compiles a list of XCTestCase subclasses included
in the test target. It iterates over each class in that list, and creates
a new instance of that class for each test method. It then creates an
"invocation" to execute that test method. The invocation is an instance of
NSInvocation, which represents a single message send in Objective-C.
The invocation is set on the XCTestCase instance, and the test is run.
Some links:
http://modocache.io/probing-sentestingkit
https://github.com/Quick/Quick/blob/master/Sources/Quick/QuickSpec.swift
https://developer.apple.com/reference/xctest/xctest?language=objc
Launch Xcode and use Cmd + Shift + O to bring up the Open Quickly dialog; type 'XCTest' and you will find some related files, such as XCTest.h, XCTestCase.h... You need to go inside these files to check out the interfaces they offer.
There is a good website about XCTest: http://iosunittesting.com/xctest-assertions/

xunit programmatically add new tests/"[Facts]"?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit [Fact], as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say, 50 files). Each is then hammered against the URI to see if each returns HTTP 200 ("ok"). If any fail, we currently print it as a failure by using:
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there is a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection + the actual methods, the way of passing the tests to run to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
Hope this helps; feel free to use the example implementation if you like.

Test framework for component testing

I am looking for a test framework that suit my requirements. Following are the steps that I need to perform during automated testing:
SetUp (There are some input files that need to be read or copied into specific folders.)
Execute (Run the standalone executable.)
Tear Down (Clean up to bring the system back to its old state.)
Apart from this, I also want some intelligence to ensure that if a .cc file changed, all the tests that can validate the changes are run.
I am evaluating PyUnit and CppUnit with SCons for this. I'm posting this question to make sure I am headed in the right direction. Can you suggest any other test frameworks? And what other requirements should be considered to select the right test framework?
Try googletest, AKA gTest; it is no worse than any other unit test framework, and can even beat some in ease of use. It is not exactly the integration-testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
protected:
    // You can remove any or all of the following functions if its body
    // is empty.

    FooTest() {
        // You can do set-up work for each test here.
    }

    virtual ~FooTest() {
        // You can do clean-up work that doesn't throw exceptions here.
    }

    // If the constructor and destructor are not enough for setting up
    // and cleaning up each test, you can define the following methods:

    virtual void SetUp() {
        // Code here will be called immediately after the constructor (right
        // before each test).
    }

    virtual void TearDown() {
        // Code here will be called immediately after each test (right
        // before the destructor).
    }

    // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
    // Exercises the Xyz feature of Foo.
}

}  // namespace
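Note that the sample only defines the fixture and one test; to execute it you still need a main (or you can link against the gtest_main library instead). A minimal sketch:
#include <gtest/gtest.h>

int main(int argc, char **argv) {
    // Parses gTest flags such as --gtest_filter, then runs every
    // registered test and returns non-zero on failure.
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}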
SCons could take care of building your .cc files when they change; gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to try to adjust and tweak another to match your requirements.
One good option IMO, and it is something our test automation framework is moving towards, is using nosetests, coupled with a library of common routines (like start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python and not C++ or something like that, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++.
After reading the article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago, I went for CxxTest.
Once you have it set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.
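For reference, a CxxTest suite is just a header containing a class that derives from CxxTest::TestSuite, with test methods whose names start with "test". A minimal sketch (the class and file names here are made up):
// MyTestSuite.h
#include <cxxtest/TestSuite.h>

class MyTestSuite : public CxxTest::TestSuite {
public:
    void testAddition() {
        TS_ASSERT_EQUALS(1 + 1, 2);
    }
};
The Python-based cxxtestgen script (hence the Python dependency mentioned above) then generates the runner source from such headers, which you compile and run like any other program.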