Running a test for every file in a directory, dynamically - C++

I have an application written in C++, configured with CMake and tested using Catch2; the tests are invoked through CTest.
I have a fairly large list of files, each of which contains captures of messages that have caused an issue in my application in the past. I currently have a single test that runs through each of these files serially, using code that looks approximately like this:
TEST_CASE("server type regressions", "[server type]") {
auto const state = do_some_setup();
for (auto const path : files_to_test()) {
INFO(path);
auto parser = state.make_parser(path);
for (auto const message : parser) {
INFO(message);
handle(message);
}
}
}
The message handler has a bunch of internal consistency checks, so when this test fails, it typically does so by throwing an exception.
Is it possible to improve this solution to get / keep the following:
Run the initial do_some_setup once for all of the tests, but then run the test for each file in parallel. do_some_setup is fairly slow, and I have enough files relative to the number of cores that I wouldn't want to have to do setup per file. It would also be acceptable to run do_some_setup more than once, as long as it's better than O(n) in the number of files.
Run the regression test on all the files, even when an earlier file fails. I know I could do this with a try + catch and manually setting a bool has_failed on any failure, but I'd prefer a built-in way to do this, if one exists (one possible approach is sketched below).
Be able to specify the file name when invoking the tests, so that I can manually run just the test for a single file.
Automatically detect the set of files. I would prefer not to switch to a solution where I have to add test files to the test file directory and also update some other location that lists all of the files being tested in order to shard them manually.
I'm willing to write some CMake to manage this, pass some special flags to CTest or Catch2, or change to a different unit testing framework.
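For the "keep going after a failure" point specifically, Catch2's non-fatal CHECK family may already cover it without a manual has_failed flag; a minimal sketch of the same loop, assuming handle() reports problems by throwing as described:

TEST_CASE("server type regressions", "[server type]") {
    auto const state = do_some_setup();            // still runs once, as in the original
    for (auto const& path : files_to_test()) {
        INFO(path);
        auto parser = state.make_parser(path);
        for (auto const& message : parser) {
            INFO(message);
            // Non-fatal assertion: a throwing handler is recorded as a failure,
            // but the loops continue with the remaining messages and files.
            CHECK_NOTHROW(handle(message));
        }
    }
}

This does not by itself give per-file parallelism or per-file selection; those would still need something like one registered test case per file.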

Related

In-repo addon writing public files on build causes endless build loop on serve

I'm having difficulty with my in-repo addon writing to appDir/public. What I'd like to do is write out a JSON file on each build to be included in the app's /dist. The problem I'm running into is that when running "ember serve", the file watcher detects the new file and rebuilds again, causing an endless loop.
I've tried writing the JSON file using the preBuild() and postBuild() hooks, saving to /public, but after the build the watcher detects it and rebuilds over and over, writing a new file again each time. I also tried using the my-addon/public folder and writing to that; same thing.
The only thing that partially works is writing on init(), which is fine, except I don't see the changes using ember serve.
I did try using the treeForPublic() method, but did not get any further. I can write the file and use treeForPublic(), but this only runs once, on the initial build. It partially solves my problem, because I get the files into the app's dist folder, but I don't think ember serve will re-run treeForPublic() on subsequent file changes in the app.
Is there a way to ignore specific files from file watch? Yet still allow files to include into the build? Maybe there's an exclude watch property in ember-cli-build?
Here's my treeForPublic(), but I'm guessing my problems aren't here:
treeForPublic: function() {
    const publicTree = this._super.treeForPublic.apply(this, arguments);
    const trees = [];

    if (publicTree) {
        trees.push(publicTree);
    }

    // this writes out the json
    this.saveSettingsFile(this.pubSettingsFile, this.settings);

    trees.push(new Funnel(this.addonPubDataPath, {
        include: [this.pubSettingsFileName],
        destDir: '/data'
    }));

    return mergeTrees(trees);
},
UPDATE 05/20/2019
I should probably make a new question at this point...
My goal here is to create an auto-incrementing build number that updates both on ember build and ember serve. My comments under #real_ate's answer below help explain why. In the end, if I can only use this on build, that's totally ok.
The answer from #real_ate was very helpful and solved the endless loop problem, but it doesn't run on ember serve. Maybe this just can't be done, but I'd really like to know either way. I'm currently trying to change environment variables instead of using treeForPublic(). I've asked that as a separate question about addon config() updates to the Ember environment:
Updating Ember.js environment variables do not take effect using in-repo addon config() method on ember serve
I don't know if I can mark #real_ate's answer as the accepted solution, because it doesn't work on ember serve. It was extremely helpful and educational!
This is a great question, and it's often something that people can be a bit confused about when working with broccoli (I know for sure that I've been stung by this in the past)
The issue that you have is that your treeForPublic() is actually writing a file to the source directory and then you're using broccoli-funnel to select that new custom file and include it in the build. The correct method to do this is instead to use broccoli-file-creator to create an output tree that includes your new file. I'll go into more detail with an example below:
treeForPublic: function() {
    const publicTree = this._super.treeForPublic.apply(this, arguments);
    const trees = [];

    if (publicTree) {
        trees.push(publicTree);
    }

    let data = getSettingsData(this.settings);
    trees.push(writeFile('/data/the-settings-file.json', JSON.stringify(data)));

    return mergeTrees(trees);
}
As you will see, most of the code is exactly the same as in your example. The first main difference is that instead of having a function this.saveSettingsFile() that writes a settings file to disk, we now have a function this.getSettingsData() that returns the content we would like to see in the newly created file. Here is the simple example that we came up with when we were testing this out:
function getSettingsData() {
    return {
        setting1: 'face',
        setting2: 'my',
    };
}
You can edit this function to take whatever parameters you need and to have whatever functionality you would like.
The next major difference is that we are using the writeFile() function, which is actually just the broccoli-file-creator plugin. Here is the import that you would put at the top of the file:
let writeFile = require('broccoli-file-creator');
Now when you run your application it won't be writing to the source directory any more, which means it will stop constantly reloading 🎉
This question was answered as part of "May I Ask a Question" Season 2 Episode 2. If you would like to see us discuss this answer in full you can check out the video here: https://youtu.be/9kMGMK9Ur4E

How to organize fixture data and access it from tests in C/C++

How do I compute the path to data fixture files in test code, given:
test/{main.cpp,one_test.cpp,two_test.cpp}
compilation done in build/
test/fixtures/{conf_1.cfg}
The problem I'm facing is as follows:
/* in test/one_test.cpp */
TEST_CASE( "Config from file", "[config]" ) {
    Config conf;
    REQUIRE( conf.read(??? + "/conf_1.cfg") );
}
The solution I found so far is to define a macro at configure time:
#define TEST_DIR "/absolute/path/to/test"
which is obtained in my wscript with
def configure(cnf):
    # ...
    cnf.env.TEST_DIR = cnf.path.get_src().abspath()
    cnf.define('TEST_DIR', cnf.env.TEST_DIR)
    cnf.write_config_header('include/config.h')
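The test can then build the path from that macro; a sketch, assuming the generated header is included as config.h and the fixture lives in test/fixtures/:

/* in test/one_test.cpp */
#include "config.h"   // generated by waf; defines TEST_DIR as a string literal
#include <string>

TEST_CASE( "Config from file", "[config]" ) {
    Config conf;
    // TEST_DIR is the absolute path to test/, injected at configure time
    REQUIRE( conf.read(std::string(TEST_DIR) + "/fixtures/conf_1.cfg") );
}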
Other attempts included __FILE__ which expanded to ../test/one_test.cpp, but I couldn't use it.
Some background: I'm using the Catch testing framework, with the waf build tool.
Is there a common practice or pattern, possibly dependent on the testing framework?
We found this hard to solve at compile/build time as refactoring components (and therefore tests) would move code around. We found two possible solutions:
Put the data into the test. This is only practical if it's short and human-readable - strings or an easy hex dump. You could always put the data into a header file if that would make the test easier to maintain.
Specify the location of the data files on the command line when you run the tests. For this, you may need to supply your own main() (see 'Supplying your own main()' in the Catch documentation).
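A minimal sketch of that second option using Catch2 v2's Session/Clara API (the --fixture-dir option name and the global are assumptions, not something from the question):

// test/main.cpp
#define CATCH_CONFIG_RUNNER
#include <catch2/catch.hpp>
#include <string>

// Read by the test cases to locate fixture files; defaults to the source layout.
std::string fixture_dir = "test/fixtures";

int main(int argc, char* argv[]) {
    Catch::Session session;

    // Add a --fixture-dir option on top of Catch's own command line.
    using namespace Catch::clara;
    auto cli = session.cli()
             | Opt(fixture_dir, "dir")["--fixture-dir"]("directory containing fixture files");
    session.cli(cli);

    int rc = session.applyCommandLine(argc, argv);
    if (rc != 0)
        return rc;

    return session.run();
}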

Getting CppUnit to use command-line arguments

I have a CppUnit test which tests a class designed to read configuration; we can call this class Config.
The Config class can do the following:
Config c;
c.read("/tmp/random-tmp-directory/test.conf");
The random-temp-directory is created by a bash script and should be passed into the test binary.
#!/bin/bash
TEMPDIR=$(mktemp -d)
cp files/config/test.conf "$TEMPDIR"/.
./testConfig "$TEMPDIR"/test.conf
The above creates a temp directory, copies our temporary file and passes the path to the test, so it can load the correct file.
Is there a way to tell CppUnit to pass the command-line arguments, or any arguments, to the test registry?
Here is my testConfig.cpp:
#include <all the required.h>

CPPUNIT_TEST_SUITE_REGISTRATION(testConfig);

int main(int argc, char ** argv)
{
    CPPUNIT_NS::TestResult testresult;
    CPPUNIT_NS::TestRunner runner;
    CPPUNIT_NS::TestFactoryRegistry &registry = CPPUNIT_NS::TestFactoryRegistry::getRegistry();

    // register listener for collecting the test results
    CPPUNIT_NS::TestResultCollector collectedresults;
    testresult.addListener(&collectedresults);

    runner.addTest(registry.makeTest());
    runner.run(testresult);

    // Print the test results in a compiler-compatible format.
    CppUnit::CompilerOutputter outputter(&collectedresults, std::cerr);
    outputter.write();

    return collectedresults.wasSuccessful() ? 0 : 1;
}
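One minimal way to get the path from the script into the fixtures, given that main() is hand-written here, is to stash argv in a variable the fixture reads; a sketch (the global name and the trimmed-down runner are assumptions, not CppUnit-specific API):

#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <string>

// Declared extern in testConfig.cpp and used there instead of a hard-coded path.
std::string g_testConfPath = "test.conf";

int main(int argc, char** argv)
{
    if (argc > 1)
        g_testConfPath = argv[1];   // e.g. the test.conf path passed by the shell script

    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
    return runner.run() ? 0 : 1;
}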
Consider dividing your code into at least three distinct methods: the part that constructs the config file name, the part that reads the config file, and the part that parses what was read from the config file. You can easily and thoroughly unit test both the file name builder and the parser methods. And as long as you can test simply reading data from the file even one time, you should be golden.
[edit]
For example, you might have a method like string & assembleConfigFileName(string basepath, string randompath, string filename) that takes in the different components of your path and filename, and puts them together. One unit test should look like this:
void TestConfig::assembleConfigFileName_good()
{
    string goodBase("/tmp");
    string goodPath("1234");
    string goodName("test.conf");

    string actual(assembleConfigFileName(goodBase, goodPath, goodName));
    string expected("/tmp/1234/test.conf");

    CPPUNIT_ASSERT_EQUAL(expected, actual);
}
Now you can test that you're building the fully qualified config file name exactly correctly. The test is not trying to read a file. The test is not trying to generate a random number. The test is providing an example of exactly what kinds of input the routine needs to take, and stating exactly what the output should look like given that exact input. And it's proving the code does exactly that.
It's not important for this routine to actually read a config file out of a temp directory. It's only important that it generate the right file name.
Similarly, you build a unit test to test for each possible flow through your code, including error scenarios. Let's say you wrote an exception handler that throws if the random path is wrong. Your unit test will test the exception mechanism:
void TestConfig::assembleConfigFileName_null_path()
{
    string goodBase("/tmp");
    string nullPath;
    string goodName("temp.config");

    CPPUNIT_ASSERT_THROW(assembleConfigFileName(goodBase, nullPath, goodName), MissingPathException);
}
The tests are now a document that says exactly how it works, and exactly how it fails. And they prove it every single time you run the tests.
Something you appear to be trying to do is to create a system test, not a unit test. In a unit test, you do NOT want to be passing in randomly pathed config files. You aren't trying to test the external dependencies, that the file system works, that a shell script works, that $TMPDIR works, none of that. You're only trying to test that the logic you've written works.
Testing random files in the operating system is very appropriate for automated system tests, but not for automated unit tests.

How to access project files from NUnit tests

I have some Tests that I run with ReSharpers "Run All Tests from Solution" feature. One of the classes being tested has a dependency on a file in the same folder as the assembly containing it. This file is copied to the output directory via MSBuild (set "Copy To Output Directory" to "Copy always").
Problem: The tests are not being run from the normal assembly output directory, but instead some temporary location in my user profile.
Therefore, I don't really know where to look for the file - the test runner does not copy it there. Can I force it to?
The NUnit website recommends, in this exact case, using the Assembly.CodeBase property, which leads to the bin/Debug folder I needed.
"Note: If you are tempted to disable shadow copy in order to access files in the same directory as your assembly, you should be aware that there are alternatives. Consider using the Assembly.Codebase property rather than Assembly.Location."
The .CodeBase property returns a URI-style address ("file:////D://Projects ... "), so the actual code I used was:
string applicationDirectory = new Uri(Path.GetDirectoryName(Assembly.GetExecutingAssembly().CodeBase)).LocalPath;
Sounds like you're running your tests with the Shadow Copy option turned on.
Go to Resharper->Options and select the Unit Testing tab (right at the bottom of the list). Uncheck "Shadow-copy assemblies being tested" and try again.

Unit Tests for comparing text files in NUnit

I have a class that processes two XML files and produces a text file.
I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the contents of the expected output.
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a textfile diff algorithm?
class ReportGenerator
{
    string Generate(string inputPathA, string inputPathB)
    {
        // do stuff
    }
}

[TestFixture]
public class ReportGeneratorTests
{
    static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        {
            using (StreamReader rs2 = File.OpenText(pathToActualResult))
            {
                string actualContents = rs2.ReadToEnd();
                string expectedContents = rs1.ReadToEnd();
                // this works, but the output could be a LOT more useful.
                Assert.AreEqual(expectedContents, actualContents);
            }
        }
    }

    static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    // etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.xml")]
[Row("y1.xml", "y2.xml", "y-expected.xml")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
You are probably asking about testing against "gold" data. I don't know if there is a specific, widely accepted term for this kind of testing, but this is how we do it.
Create a base fixture class. It basically has a "void DoTest(string fileName)" method, which reads the specified file into memory, executes the abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content is different, it throws an exception. The exception thrown contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark the lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results.
Then you will have specific test fixtures, where you implement the Transform method that does the right job, and tests which look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of the failed test will instantly tell you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, such as ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet which will create a test for you in a second; you will spend much more time preparing the data.
Then you will also need some test data and a way for your base fixture to find it; be sure to set up rules about this for the project. If a test fails, dump the actual output to a file next to the gold one, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold" file.
I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, then diff the XML files (without writing the result to disk) and compare that to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.
Then, you can write new tests just by adding new text files.
Rather than calling Assert.AreEqual, you could parse the two input streams yourself, keep a count of line and column, and compare the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
Note that, as a rule, I'd generally only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye or some other method that the data it contains is correct!
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: But in reality it was always enough for me to just do a simple read of the whole file to a string and compare the two strings. For the reporting it is enough to see that the test failed. Then when I do the debugging I usually diff the files using Araxis Merge to see where exactly I have issues.