How to make CppUnit logging more verbose? - c++

I'm using CppUnit to write unit tests for a C++ library. By default it prints a single "." character to the console for each test. I'd like to log the name of each test on a separate line, before the test runs.
I've looked into the CppUnit API, but it's not at all obvious how to customize the output. Instead of offering customization options, it's more of a framework that you can plug new handlers into. (The tutorial hasn't helped, either.) I could probably spend a day figuring out how to do this, but I can't afford to lose the time. Could someone provide a quick snippet that can customize the per-test log output?

It is simple enough to define and install a custom progress listener to emit the name of each test before it's performed. Here's one I wrote today:
class MyCustomProgressTestListener : public CppUnit::TextTestProgressListener {
public:
    virtual void startTest(CppUnit::Test *test) {
        fprintf(stderr, "starting test %s\n", test->getName().c_str());
    }
};
Install it on a test runner like this:
CppUnit::TextUi::TestRunner runner;
MyCustomProgressTestListener progress;
runner.eventManager().addListener(&progress);
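For completeness, here is a minimal sketch of a full runner using that listener. It assumes your suites are registered with CPPUNIT_TEST_SUITE_REGISTRATION so they can be pulled from the global TestFactoryRegistry:
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

int main() {
    CppUnit::TextUi::TestRunner runner;

    // Print each test's name before it runs.
    MyCustomProgressTestListener progress;
    runner.eventManager().addListener(&progress);

    // Add every suite registered via CPPUNIT_TEST_SUITE_REGISTRATION.
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Returns true if all tests passed.
    bool ok = runner.run();
    return ok ? 0 : 1;
}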

Related

How to configure unit testing for AnyLogic agent code?

How do you configure unit testing framework to help develop code that is part of AnyLogic agents?
To have a suitable test driven development rhythm, I need to be able to run all tests in a few seconds. I thought of exporting the project as a standalone application (jar) each time, but that's pretty slow.
I thought of trying to write all the code outside AnyLogic in separate classes, but there are many references to built-in AnyLogic classes, as well as various agents. My code would need to refer to these somehow, and I'm not sure how to do that except by writing the code inside AnyLogic.
I wonder if there's a way of adding the test runner as a dependency, and executing that test runner from within AnyLogic.
Does anyone have a setup that works nicely?
This definitely requires some advanced Java, but testing, especially unit testing, is too often neglected when building good, robust models. I hope this simple example is enough to get you (and lots of other modellers) going.
For JUnit testing, we make use of two libraries that you can add as dependencies to your model.
Now there are two main types of logic that you will want to test in simulation models:
1. Functions in Java classes
2. Model execution
Type 1: Suppose I have this very simple Java class
public class MyClass {

    public MyClass() {
    }

    public boolean getResult() {
        return true;
    }
}
And I want to test the function getResult().
I can simply create a new class with a test method that I annotate with @Test, and then make use of the assertEquals() method, which is standard in JUnit testing:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MyTestClass {

    @Test
    public void testMyClassFunction1() {
        boolean result = new MyClass().getResult();
        assertEquals("The value of the test class 1", result, true);
    }
}
Now comes the AnyLogic-specific implementation (there are other ways to do this, but this is the easiest and most useful, as you will see in a minute).
You need to create a custom experiment whose code runs the JUnit tests; a sketch of what that code might look like is shown below.
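The experiment's code itself was shown as a screenshot in the original answer and isn't reproduced here. As a rough sketch, assuming JUnit 4's JUnitCore runner, the code area of a custom experiment called something like RunAllTests could contain:
// In the experiment's imports section:
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// In the experiment's code area:
Result result = JUnitCore.runClasses(MyTestClass.class);
for (Failure failure : result.getFailures()) {
    System.out.println(failure.toString());
}
System.out.println(result.wasSuccessful() ? "SUCCESS" : "FAILURE");
System.out.println("Run: " + result.getRunCount());
System.out.println("Failed: " + result.getFailureCount());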
Now if you run this from the Run Model button you will get this output
SUCCESS
Run: 1
Failed: 0
You can obviously update and change the output to your liking.
Type 2: Suppose we have this very simple model, where the function getResult() simply returns an int of 2.
Now we need to create another custom experiment to run this model; again, a sketch follows below.
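This experiment's code also appeared as a screenshot originally. A minimal, hedged sketch of what the SingleRun custom experiment might contain, assuming the usual AnyLogic pattern of creating an engine and a top-level agent: the engine pattern and the Main agent name are assumptions, while SingleRun, runExperiment() and getResult() come from the answer.
// A method on the SingleRun custom experiment.
public int runExperiment() {
    Engine engine = createEngine();
    // engine.setStopTime(...) may be needed if the model does not stop on its own.
    Main root = new Main(engine, null, null);  // top-level agent of the model (name assumed)
    engine.start(root);
    engine.runFast();                          // run to completion in virtual time
    int result = root.getResult();             // the model function that returns 2
    engine.stop();
    return result;
}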
And then we can write a test to run this Custom Experiment and check the result
Simply add the following to your MyTestClass
@Test
public void testMyClassFunction2() {
    int result = new SingleRun(null).runExperiment();
    assertEquals("Value of a single run", result, 2);
}
And now if you run the RunAllTests custom experiment it will give you this output:
SUCCESS
Run: 2
Failed: 0
This is just the beginning; you can read up on plenty of ways to use JUnit to your advantage.

xunit programmatically add new tests/"[Facts]"?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit "[Fact]", as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if it returns HTTP 200 ("ok"). If any fail, we currently print it as a failure using
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there is a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection + the actual methods, the way of passing the tests to run to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
Hope this helps; feel free to use the example implementation if you like.

Test framework for component testing

I am looking for a test framework that suits my requirements. Following are the steps that I need to perform during automated testing:
SetUp (there are some input files that need to be read or copied into specific folders)
Execute (run the standalone application)
Tear Down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence to make sure that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and CppUnit with SCons for this. I thought of running this question to make sure I am heading in the right direction. Can you suggest any other test framework tools? And what other requirements should be considered when selecting the right test framework?
Try googletest, a.k.a. gTest. It is no worse than any other unit test framework and can beat some in ease of use. It is not exactly the integration-testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
protected:
    // You can remove any or all of the following functions if its body
    // is empty.

    FooTest() {
        // You can do set-up work for each test here.
    }

    virtual ~FooTest() {
        // You can do clean-up work that doesn't throw exceptions here.
    }

    // If the constructor and destructor are not enough for setting up
    // and cleaning up each test, you can define the following methods:

    virtual void SetUp() {
        // Code here will be called immediately after the constructor (right
        // before each test).
    }

    virtual void TearDown() {
        // Code here will be called immediately after each test (right
        // before the destructor).
    }

    // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
    // Exercises the Xyz feature of Foo.
}

}  // namespace
SCons could take care of rebuilding your .cc files when they change, and gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it might be easier to write your own than to try to adjust and tweak another one to match your requirements.
One good option IMO, and it is something our test automation framework is moving towards, is using nosetests, coupled with a library of common routines (like start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python and not C++ or something like that, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++.
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago I went for CxxTest.
Once you have it set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.
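For a flavour of what CxxTest tests look like, here is a minimal sketch (the file, suite and method names are made up); a test suite is just a header that cxxtestgen turns into a runner:
// my_suite.h - processed by cxxtestgen to generate the test runner
#include <cxxtest/TestSuite.h>

class MySuite : public CxxTest::TestSuite {
public:
    // Test methods must start with "test" so cxxtestgen picks them up.
    void testAddition() {
        TS_ASSERT_EQUALS(1 + 1, 2);
        TS_ASSERT(1 + 1 > 1);
    }
};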

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail because it looks like there's two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(somedata);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
In any case, the test class design you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want:
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Add works), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
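To make that concrete, here is a minimal sketch; the CanConnectToDatabase() helper is hypothetical and stands in for whatever prerequisite check you need:
using NUnit.Framework;

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void CheckPrerequisites()
    {
        // Hypothetical helper: replace with a real check (open a connection, try an Add, ...).
        if (!CanConnectToDatabase())
            Assert.Ignore("Database unavailable - skipping the Get tests.");
    }

    [Test]
    public void GetTest()
    {
        // ... runs only if the fixture set-up above did not call Ignore ...
    }

    private bool CanConnectToDatabase()
    {
        // Placeholder implementation for the sketch.
        return true;
    }
}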
Create a shared flag that the Add test sets when it fails, and return early from the Get test if the flag is set:
public boolean addFailed = false;

public void testAdd () {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t;            // Don't forget to rethrow
    }
}

public void testGet () {
    if (addFailed) return;
    ... old test code ...
}

Resharper running all tests when only a single one is selected

I'm using ReSharper 4.5 with Visual Studio 2008 and MbUnit testing, and there seems to be something odd with using ReSharper to run the tests.
In the margin there are icons beside the class and each test method, with the options Run and Debug. When I select Run it just shows me the results of the single test. However, I noticed that the test was taking a considerably long time to run.
When I ran SQL Server Profiler and started stepping through the code, I realized that it's not just running the selected test, but every single one in the class. Is there any reason it makes it look like it's only running one unit test while actually running them all?
It's getting to be a pain waiting for all the integration tests to run when I only care about the result of one; is there any way to change this?
I just encountered this today and I think I might have realized what causes this bug: I had my methods named similarly.
[TestMethod]
public void TestSomething()
[TestMethod]
public void TestSomethingPart2()
I saw that running TestSomething() would run both, but running TestSomethingPart2() would not. I concluded that if one test method's name is a prefix of another's, running the shorter one also runs the longer one. After renaming my second test to TestPart2Something, this issue went away.
I can confirm that this is a problem with ReSharper 5.1.
To reproduce, run test A from my sample code below (all tests will execute); run test AB (all except A will execute); etc.:
[TestMethod]
public void A()
{
    Console.WriteLine("A");
}

[TestMethod]
public void AB()
{
    Console.WriteLine("AB");
}

[TestMethod]
public void ABC()
{
    Console.WriteLine("ABC");
}

[TestMethod]
public void ABCD()
{
    Console.WriteLine("ABCD");
}

[TestMethod]
public void ABCDE()
{
    Console.WriteLine("ABCDE");
}
It took me ages to work this out. I had the remote debugger attached to a development server, and it was breaking a bit more often than I was expecting it to...
It seems to be doing a StartsWith instead of a Contains as others have said.
The workaround is to not have test method names that start with the name of another test method name.
I hope this shows up under Chris's post.
I had a similar situation that confirms the behavior he noticed.
[TestMethod()]
public void ArchiveAccountTest()
[TestMethod()]
public void ArchiveAccountTestRestore()
So running the first method would execute both, and running the second would not. I renamed my second method to TestRestore and the problem went away.
Note: I'm using Resharper 5.1 so it's still a problem.
When you right-click in the editor, the context menu appears from which you can run and debug tests. Right-click inside a test method to run or debug that single test. Right-click outside of any test method to run or debug the entire test class contained in the current file.
The current release of Gallio includes a Unit Test runner with MbUnit (and NUnit) support built-in.
From the ReSharper menu, you have the option of running a single unit test or all tests in your solution. What is cool is that the keyboard shortcuts for this are:
Alt + R, U, R - Run test from current context (if you are at a [Test] level, it runs one test, if you are at a [TestFixture] level, it runs all in the fixture!)
Alt + R, U, N - Runs all Unit Tests in your Solution
I highly recommend that you uninstall your current Gallio, then check C:\Program Files\Jetbrains\Resharper\plugins\bin and clear out any files there. Then install Gallio afresh.
Once you've done this, you should start up VS2008 and go to the ReSharper | Plugins menu to check that the Gallio plugin is active. This will give you support for MbUnit.