How to pass command-line arguments to CppUnit tests - C++

I have a CppUnit test that exercises a class designed to read configuration; let's call this class Config.
The Config class is used like this:
Config c;
c.read("/tmp/random-tmp-directory/test.conf");
The random temp directory is created by a bash script, and its path should be passed into the test binary.
#!/bin/bash
TEMPDIR=$(mktemp -d)
cp files/config/test.conf "$TEMPDIR"/.
./testConfig "$TEMPDIR"/test.conf
The above creates a temp directory, copies the test config into it, and passes the resulting path to the test binary so it can load the correct file.
Is there a way to tell CppUnit to pass the command-line arguments, or any arguments at all, through to the test registry?
Here is my testConfig.cpp
#include <all the required.h>

CPPUNIT_TEST_SUITE_REGISTRATION(testConfig);

int main(int argc, char **argv)
{
    CPPUNIT_NS::TestResult testresult;
    CPPUNIT_NS::TestRunner runner;
    CPPUNIT_NS::TestFactoryRegistry &registry = CPPUNIT_NS::TestFactoryRegistry::getRegistry();

    // Register a listener for collecting the test results.
    CPPUNIT_NS::TestResultCollector collectedresults;
    testresult.addListener(&collectedresults);

    runner.addTest(registry.makeTest());
    runner.run(testresult);

    // Print the results in a compiler-compatible format.
    CppUnit::CompilerOutputter outputter(&collectedresults, std::cerr);
    outputter.write();

    return collectedresults.wasSuccessful() ? 0 : 1;
}
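Since nothing in this main() looks at argc/argv yet, one straightforward workaround, sketched below, is to stash the argument in a variable the fixture can read. CppUnit itself has no mechanism for forwarding arguments to tests; g_configPath and testRead are invented names for illustration only.

#include <string>

// Invented name: a global the fixture reads instead of a hard-coded path.
std::string g_configPath;

int main(int argc, char **argv)
{
    if (argc > 1)
        g_configPath = argv[1];   // e.g. /tmp/random-tmp-directory/test.conf

    // ... build and run the TestRunner exactly as above ...
}

// Then, inside the fixture:
// void testConfig::testRead()
// {
//     Config c;
//     c.read(g_configPath);
//     // ... assert on the parsed values ...
// }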

Consider dividing your code into at least three distinct methods: one that constructs the config file name, one that reads the config file, and one that parses what was read from the config file. You can easily and thoroughly unit test both the file-name builder and the parser. And as long as you can show that reading data from a file works even once, you should be golden.
[edit]
For example, you might have a method like string assembleConfigFileName(string basepath, string randompath, string filename) that takes the different components of your path and filename and puts them together. One unit test could look like this:
void TestConfig::assembleConfigFileName_good()
{
    string goodBase("/tmp");
    string goodPath("1234");
    string goodName("test.conf");

    string actual(assembleConfigFileName(goodBase, goodPath, goodName));
    string expected("/tmp/1234/test.conf");

    CPPUNIT_ASSERT_EQUAL(expected, actual);
}
Now you can verify that you build the fully qualified config file name correctly. The test is not trying to read a file. The test is not trying to generate a random number. It provides an example of exactly the kind of input the routine takes, states exactly what the output should look like for that input, and proves the code does exactly that.
It's not important for this routine to actually read a config file out of a temp directory. It's only important that it generate the right file name.
Similarly, you build a unit test for each possible flow through your code, including error scenarios. Let's say the routine throws if the random path is missing. Your unit test then exercises the exception mechanism:
void TestConfig::assembleConfigFileName_null_path()
{
    string goodBase("/tmp");
    string nullPath;
    string goodName("temp.config");

    CPPUNIT_ASSERT_THROW(assembleConfigFileName(goodBase, nullPath, goodName), MissingPathException);
}
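For reference, an assembleConfigFileName that satisfies both tests could be as small as the sketch below; the MissingPathException definition is assumed, since the answer does not show one.

#include <stdexcept>
#include <string>

struct MissingPathException : std::runtime_error
{
    MissingPathException() : std::runtime_error("missing path component") {}
};

std::string assembleConfigFileName(const std::string &basepath,
                                   const std::string &randompath,
                                   const std::string &filename)
{
    if (randompath.empty())
        throw MissingPathException();

    // Join the components with '/' separators:
    // "/tmp" + "1234" + "test.conf" becomes "/tmp/1234/test.conf".
    return basepath + "/" + randompath + "/" + filename;
}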
The tests are now a document that says exactly how it works, and exactly how it fails. And they prove it every single time you run the tests.
What you appear to be trying to do is create a system test, not a unit test. In a unit test, you do NOT want to be passing in randomly pathed config files. You aren't trying to test the external dependencies: that the file system works, that a shell script works, that $TMPDIR works, none of that. You're only trying to test that the logic you've written works.
Testing random files in the operating system is very appropriate for automated system tests, but not for automated unit tests.

Related

Running a test for every file in a directory, dynamically

I have an application written in C++, configured with CMake, and tested using Catch2; the tests are invoked through CTest.
I have a fairly large list of files, each containing captures of messages that have caused an issue in my application in the past. I currently have a single test that runs through each of these files serially, using code that looks approximately like this:
TEST_CASE("server type regressions", "[server type]") {
auto const state = do_some_setup();
for (auto const path : files_to_test()) {
INFO(path);
auto parser = state.make_parser(path);
for (auto const message : parser) {
INFO(message);
handle(message);
}
}
}
The message handler has a bunch of internal consistency checks, so when this test fails, it typically does so by throwing an exception.
Is it possible to improve this solution to get / keep the following:
Run the initial do_some_setup once for all of the tests, but then run the test for each file in parallel. do_some_setup is fairly slow, and I have enough files relative to the number of cores that I wouldn't want to have to do setup per file. It would also be acceptable to run do_some_setup more than once, as long as it's better than O(n) in the number of files.
Run the regression test on all the files, even when an earlier file fails. I know I could do this with a try + catch and manually setting a bool has_failed on any failure, but I'd prefer it if there were some built-in way to do this.
Be able to specify the file name when invoking the tests, so that I can manually run the test for just a single file.
Automatically detect the set of files. I would prefer not to switch to a solution where I have to add test files to the test directory and also update some other location that lists every file under test, just to shard them manually.
I'm willing to write some CMake to manage this, pass some special flags to CTest or Catch2, or change to a different unit testing framework.

How to test a command line options parser using GTest

I am developing a command-line options processor for my app. I have decided to use GTest to test it. Its implementation is shown below in brief:
int main(int argc, char **argv)
{
    if (!ProcessOptions(argc, argv))
    {
        return 1;
    }
    // Some more code here
    return 0;
}
int ProcessOptions(int argc, char **argv)
{
    for (int i = 1; i < argc; ++i)
    {
        CheckOption(argv[i]);
        CheckArgument();
        if (Success)
        {
            EnableOption();
        }
    }
    return 1;  // success; error paths omitted in this sketch
}
The code runs as expected, but the problem is: I want to test this using GTest by supplying different options (valid and invalid) to it. The GTest manual reads:
The ::testing::InitGoogleTest() function parses the command line for
googletest flags, and removes all recognized flags. This allows the
user to control a test program's behavior via various flags, which
we'll cover in the AdvancedGuide. You must call this function before
calling RUN_ALL_TESTS(), or the flags won't be properly initialized.
But this way, I will be able to test just one sequence. I want to do this multiple times for different options. How do I do that?
Is there any better strategy to achieve this? Can I do this using test fixtures?
Have you considered a value-parameterized test? They sound perfect for your situation:
Value-parameterized tests allow you to test your code with different parameters without writing multiple copies of the same test. This is useful in a number of situations, for example:
You have a piece of code whose behavior is affected by one or more command-line flags. You want to make sure your code performs correctly for various values of those flags.
You want to test different implementations of an OO interface.
You could write one or more test(s) which define the expected behaviour of the command line argument parser and then pass the command line flags down to it this way.
A full code example is shown in the link to the Google Test GitHub docs, but here's a quick outline:
Create a test class inheriting testing::TestWithParam<T>.
Use TEST_P and within it, the GetParam() method to access the parameter value.
You can instantiate your tests with INSTANTIATE_TEST_SUITE_P. Use the testing::Values method to supply values.
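As a rough sketch (the fixture name, the sample option strings, and the assumed return convention are made up for illustration, not taken from the question), the pieces fit together like this:

#include <gtest/gtest.h>

int ProcessOptions(int argc, char **argv);   // the function under test

// Each parameter is one command-line option to feed the parser.
class ProcessOptionsTest : public testing::TestWithParam<const char*> {};

TEST_P(ProcessOptionsTest, RejectsUnknownOptions)
{
    const char *argv[] = { "app", GetParam() };
    int argc = 2;
    // Assumption: ProcessOptions returns 0 (failure) for options it
    // does not recognise, matching how main() uses it above.
    EXPECT_EQ(0, ProcessOptions(argc, const_cast<char **>(argv)));
}

INSTANTIATE_TEST_SUITE_P(UnknownOptions,
                         ProcessOptionsTest,
                         testing::Values("--bogus", "-x", "--frobnicate=1"));

A second suite instantiated with valid options can assert the opposite result, so each option combination becomes its own named test case.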

Unit Testing in Nim: Is it possible to get the name of a source file at compile time?

I'm planning a project with multiple modules, and I was looking for a nice solution to run all existing unit tests in the project at once. I came up with the following idea: I can run nim --define:testing main.nim and use the following template as a wrapper for all my unit tests.
# located in some general utils module:
template runUnitTests*(code: stmt): stmt =
  when defined(testing):
    echo "Running unit tests in ..."
    code
This seems to be working well so far.
As a minor tweak, I was wondering if I can actually print out the file name which is calling the runUnitTests template. Is there any reflection mechanism to get the source file name at compile time?
instantiationInfo seems to be what you want: http://nim-lang.org/docs/system.html#instantiationInfo,
template filename: string = instantiationInfo().filename
echo filename()
The template currentSourcePath from the system module returns the path of the current source by using a special compiler built-in, called instantiationInfo.
In your case, you need to print the locations of callers of your template, which means you'll have to use instantiationInfo directly with its default stack index argument of -1 (meaning one position up in the stack, or the caller):
# located in some general utils module:
template runUnitTests*(code: stmt): stmt =
  when defined(testing):
    echo "Running unit tests in ", instantiationInfo(-1).filename
    code
It's worth mentioning that the Nim compiler itself uses a similar technique with this module, which gets automatically imported into all other modules by a switch in nim.cfg.

How to organize fixture data and access it from tests in C/C++

How do I compute the path to data fixture files in test code, given:
test/{main.cpp,one_test.cpp,two_test.cpp}
compilation done in build/
test/fixtures/{conf_1.cfg}
The problem I'm facing is as follows:
/* in test/one_test.cpp */
TEST_CASE( "Config from file", "[config]" ) {
    Config conf;
    REQUIRE( conf.read(??? + "/conf_1.cfg") );
}
The solution I found so far is to define a macro at configure time:
#define TEST_DIR "/absolute/path/to/test"
which is obtained in my wscript with
def configure(cnf):
    # ...
    cnf.env.TEST_DIR = cnf.path.get_src().abspath()
    cnf.define('TEST_DIR', cnf.env.TEST_DIR)
    cnf.write_config_header('include/config.h')
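With that macro in place, the test above can build an absolute path to the fixture. This is a sketch assuming the generated include/config.h is on the include path and the layout is test/fixtures/conf_1.cfg as described in the question:

/* in test/one_test.cpp */
#include "config.h"   // generated by waf, defines TEST_DIR

TEST_CASE( "Config from file", "[config]" ) {
    Config conf;
    // TEST_DIR is a string literal, so plain literal concatenation works here.
    REQUIRE( conf.read(TEST_DIR "/fixtures/conf_1.cfg") );
}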
Other attempts included __FILE__ which expanded to ../test/one_test.cpp, but I couldn't use it.
Some background: I'm using the Catch testing framework, with the waf build tool.
Is there a common practice or pattern, possibly dependent on the testing framework?
We found this hard to solve at compile/build time as refactoring components (and therefore tests) would move code around. We found two possible solutions:
Put the data into the test. This is only practical if it's short and human-readable: strings or an easy hex dump. You could always put the data into a header file if that would make the test easier to maintain.
Specify the location of the data files at the command line when you run the tests. For this, you may need to supply your own main() (see 'Supplying your own main()' in the Catch docs).
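A minimal sketch of that second option with Catch's CATCH_CONFIG_RUNNER follows; the g_dataDir global and the TEST_DATA_DIR environment variable are invented names, not part of Catch:

// test/main.cpp
#define CATCH_CONFIG_RUNNER
#include <catch.hpp>

#include <cstdlib>
#include <string>

// Invented global the tests use to locate their fixture files.
std::string g_dataDir = "fixtures";

int main(int argc, char *argv[])
{
    // Let an environment variable override the default location, e.g.
    //   TEST_DATA_DIR=/absolute/path/to/test/fixtures ./run_tests
    if (const char *dir = std::getenv("TEST_DATA_DIR"))
        g_dataDir = dir;

    Catch::Session session;
    int rc = session.applyCommandLine(argc, argv);
    if (rc != 0)
        return rc;
    return session.run();
}

// In a test:
// REQUIRE( conf.read(g_dataDir + "/conf_1.cfg") );

The build script or CTest wrapper then only has to set the variable (or pass an extra argument) before invoking the test binary.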

Unit Tests for comparing text files in NUnit

I have a class that processes two XML files and produces a text file.
I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the contents of the expected output.
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a textfile diff algorithm?
class ReportGenerator
{
    public string Generate(string inputPathA, string inputPathB)
    {
        // do stuff
    }
}

[TestFixture]
public class ReportGeneratorTests
{
    static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        {
            using (StreamReader rs2 = File.OpenText(pathToActualResult))
            {
                string actualContents = rs2.ReadToEnd();
                string expectedContents = rs1.ReadToEnd();
                // This works, but the output could be a LOT more useful.
                Assert.AreEqual(expectedContents, actualContents);
            }
        }
    }

    static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    // etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;
[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.xml")]
[Row("y1.xml", "y2.xml", "y-expected.xml")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
You are probably asking about testing against "gold" data. I don't know if there is a world-wide accepted term for this kind of testing, but this is how we do it.
Create a base fixture class. It basically has a "void DoTest(string fileName)" method, which reads the specified file into memory, executes an abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content differs, it throws an exception. The exception contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark the lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results.
Then you have specific test fixtures in which you implement the Transform method to do the right job, and tests that look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of the failed test instantly tells you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, like ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet that writes a test for you in a second; you will spend much more time preparing the data.
Then you will also need some test data and a way for your base fixture to find it; be sure to set up project-wide rules for this. If a test fails, dump the actual output to a file next to the gold one, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is still written, so you can check that it is correct and copy it to become the "gold".
I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, then diff the XML files in memory (without writing the result to disk) and compare that result to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.
Then, you can write new tests just by adding new text files.
Rather than call .AreEqual you could parse the two input streams yourself, keep a count of line and column and compare the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output:
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
Note: as a rule, I'd generally only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye or some other method that the data it contains is correct!
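In C++ terms (to match the rest of this page; porting it to C# for NUnit is mechanical), that comparison loop might look like the following sketch:

#include <istream>
#include <sstream>
#include <string>

// Walk both streams together, tracking line and column, and describe the
// first difference. Returns an empty string when the streams are identical.
std::string firstDifference(std::istream &expected, std::istream &actual)
{
    const int eof = std::istream::traits_type::eof();
    int line = 1, column = 1;
    for (;;)
    {
        const int e = expected.get();
        const int a = actual.get();
        if (e == eof && a == eof)
            return "";  // no difference found
        if (e != a)
        {
            std::ostringstream msg;
            msg << "Line " << line << " Column " << column << " - ";
            if (a == eof)
                msg << "actual output ends early";
            else if (e == eof)
                msg << "actual output is longer than expected";
            else
                msg << "Found '" << static_cast<char>(a)
                    << "' when '" << static_cast<char>(e) << "' was expected";
            return msg.str();
        }
        if (e == '\n') { ++line; column = 1; }
        else           { ++column;           }
    }
}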
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: But in reality it was always enough for me to just do a simple read of the whole file to a string and compare the two strings. For the reporting it is enough to see that the test failed. Then when I do the debugging I usually diff the files using Araxis Merge to see where exactly I have issues.