disable gcov coverage at run-time - c++

I'm writing some tests in C++ and I'm using gcov (actually lcov, but I think that's beside the point) to get information about coverage.
Is there any way to disable coverage recording at run-time?
E.g.:
bool myTest() {
    ObjectToTest obj;
    /* Enable gcov... */
    obj.FunctionToTest();
    /* ...Disable gcov */
    if (obj.GetStatus() != WHATEVER)
        return false;
    else
        return true;
}
In this case I would like gcov to report just FunctionToTest as "covered" and leave the ObjectToTest constructor and GetStatus "uncovered".
Thanks in advance!

No, gcov has no such option.
I have seen such options in some coverage tools like Clover, which works by instrumenting the source code directly.
That said, one solution to your problem would be to move that part of the code into a different source file and then call it from your desired source file by including it.
I am suggesting this because when you later generate the coverage report with LCOV or GCOVR, both provide options to exclude specified files from the report:
LCOV:
-r tracefile pattern
--remove tracefile pattern
Remove data from tracefile.
GCOVR:
-e EXCLUDE, --exclude=EXCLUDE
Exclude data files that match this regular expression
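For example, if the code you don't want counted lives in a separate file (coverage_off.cpp is a made-up name here), the report could be filtered like this:
lcov --remove coverage.info '*/coverage_off.cpp' --output-file coverage.filtered.info
gcovr --exclude '.*coverage_off\.cpp' -r .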

Although I agree with what @VikasTawniya said, you can also mock the functions you don't want to track in your test code.
#ifdef NO_COV
#include "mock.h" // mock of obj.FunctionToTest(); does nothing
#else
#include "real.h" // real implementation of obj.FunctionToTest()
#endif
Now your coverage result is not spoiled by the call to obj.FunctionToTest().
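Assuming NO_COV is the macro name you pick, the coverage build would then be compiled with something like:
g++ --coverage -DNO_COV -c myTest.cpp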

A feasible solution now (2023):
extern "C" void __gcov_reset(void); // provided by libgcov when compiling with --coverage
extern "C" void __gcov_flush(void); // GCC 10 and earlier; GCC 11+ replaced it with __gcov_dump()

void some_function() {
    __gcov_reset(); // this will reset all profile counters
    do_something();
    __gcov_flush(); // this will flush all counters (note: this flush is incremental and will not overwrite existing profile data)
}

// To prevent the rest of the run's profile data from being flushed at exit
void __attribute__((destructor)) clear_redundant_gcov() {
    __gcov_reset();
}
Additional explanation: when you compile your source code with coverage enabled, gcc inserts a lot of profiling calls (such as __gcov_inc_counter(xxx), just an example), and it is possible to invoke these gcov API functions from within your own source code.

Related

Allow test result to depend upon installation of external tool

I am creating a Golang project, and doing a pretty good job of adding tests in step with the features. The general idea is that I perform a "semantic diff" between files and git branches. For the newest feature, the behavior depends upon whether an external tool (tree-sitter-cli) is installed, and upon which extra capabilities are installed.
If the external tool (or its optional grammars) is not installed, I expect different results from my internal function. However, both results are consistent with my tool (sdt) itself being correct. For example, here is a test I have adapted for another programming language, analyzed without tree-sitter:
func TestNoSemanticDiff(t *testing.T) {
    // Narrow the options to these test files
    opts := options
    opts.Source = file0.name
    opts.Destination = file1.name
    report := treesitter.Diff("", opts, config)
    if !strings.Contains(report, "| No semantic differences detected") {
        t.Fatalf("Failed to recognize semantic equivalence of %s and %s",
            opts.Source, opts.Destination)
    }
}
The tests for various supported languages are mostly the same in form. In this case, however, receiving the "report" of "| No available semantic analyzer for this format" would also be a correct value if the external tool is missing (or did not include that capability).
The thing is, I can run code to determine which value is expected; but that code would ideally be "run once at top of test" rather than re-check within each of many similar tests.
So what I'd ideally want is to first run some sort of setup/scaffolding, then have individual tests written something like:
if !treeSitterInstalled { // checked once at start of test file
    if !strings.Contains(report, "appropriate response") { ... }
} else if treeSitterNoLangSupport { // ditto, configured once
    if !strings.Contains(report, "this other thing") { ... }
} else {
    if !strings.Contains(report, "the stuff where it's installed") { ... }
}
In your test file you could simply use the init function to check for the external dependency and set a variable in that same file, which can then be checked inside the individual tests.
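A minimal sketch of that idea (detecting the tree-sitter binary on the PATH is an assumption here; substitute whatever check fits the optional grammars):
package treesitter_test

import (
    "os/exec"
    "testing"
)

var treeSitterInstalled bool

// init runs exactly once, when the test binary starts, before any test
func init() {
    _, err := exec.LookPath("tree-sitter")
    treeSitterInstalled = err == nil
}

func TestNoSemanticDiff(t *testing.T) {
    // ... produce report as before, then branch on treeSitterInstalled ...
}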

Running a test for every file in a directory, dynamically

I have an application written in C++, configured with CMake, tested using Catch2, the tests are invoked through CTest.
I have a fairly large list of files that each contain captures of messages that have caused an issue in my application in the past. I currently have a single test that runs through each of these files serially using code that looks approximately like:
TEST_CASE("server type regressions", "[server type]") {
    auto const state = do_some_setup();
    for (auto const path : files_to_test()) {
        INFO(path);
        auto parser = state.make_parser(path);
        for (auto const message : parser) {
            INFO(message);
            handle(message);
        }
    }
}
The message handler has a bunch of internal consistency checks, so when this test fails, it typically does so by throwing an exception.
Is it possible to improve this solution to get / keep the following:
Run the initial do_some_setup once for all of the tests, but then run the test for each file in parallel. do_some_setup is fairly slow, and I have enough files relative to the number of cores that I wouldn't want to have to do setup per file. It would also be acceptable to run do_some_setup more than once, as long as it's better than O(n) in the number of files.
Run the regression test on all the files, even when an earlier file fails. I know I could do this with a try + catch and manually setting a bool has_failed on any failure, but I'd prefer it if there were some built-in way to do this?
Be able to specify the file name when invoking tests, so that I can manually run just the test for a single file
Automatically detect the set of files. I would prefer not having to change to a solution where I need to add test files to the test file directory and also update some other location that lists all of the files I'm testing to manually shard them
I'm willing to write some CMake to manage this, pass some special flags to CTest or Catch2, or change to a different unit testing framework.
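One way to get most of this within Catch2 itself is generators (a sketch, assuming Catch2 v2.10 or later; do_some_setup, files_to_test and handle as above). Each file becomes its own run of the test case, so an exception from one file doesn't stop the others, and a single file can be selected at the command line with -c "<path>". The runs are still serial, though; real parallelism would need sharding at the CTest level:
#include <catch2/catch.hpp>

TEST_CASE("server type regressions", "[server type]") {
    // static locals are initialised on first entry only, so setup runs once
    static auto const state = do_some_setup();
    static auto const files = files_to_test();

    // one test-case run per file; a failure in one run doesn't abort the rest
    auto const path = GENERATE(from_range(files));

    DYNAMIC_SECTION(path) { // section named after the file, selectable via -c
        auto parser = state.make_parser(path);
        for (auto const message : parser) {
            INFO(message);
            handle(message);
        }
    }
}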

How would you auto generate a class implementation during pre-compile?

I was looking at creating a service locator that also provides "null" implementations of the services it locates. How would you go about parsing the interface and auto-generating an implementation? What tools would you use, say, with CMake? Basically, I am trying to avoid having to write something like:
class IAudio
{
    virtual void playSound(SoundId Id) = 0;
    ....
};

***** Following is the boilerplate I would like to avoid *****

class NullAudio : public IAudio
{
    void playSound(SoundId) override { /* Does Nothing */ }
    ....
};
I can't seem to find any examples from my Google searches for automatic code generation - they all turn up things about code completion in editors. I would even consider running a Python script that looks for files starting with I - like IAudio.hpp - parses them, and writes out a file, if that is the common way to do this.
Thanks!
I think I figured it out: write a small tool to do what I want, configure CMake to compile it first, and then have CMake run the tool with add_custom_command.
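A minimal sketch of that wiring (the tool name null_impl_gen, the target my_app, and the paths are made up):
# Build the generator tool first
add_executable(null_impl_gen tools/null_impl_gen.cpp)

# Re-run the tool whenever the interface header changes
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/NullAudio.hpp
    COMMAND null_impl_gen ${CMAKE_CURRENT_SOURCE_DIR}/IAudio.hpp
            ${CMAKE_CURRENT_BINARY_DIR}/NullAudio.hpp
    DEPENDS null_impl_gen ${CMAKE_CURRENT_SOURCE_DIR}/IAudio.hpp
)

# Make the consuming target depend on the generated header
add_custom_target(generate_null_audio
    DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/NullAudio.hpp
)
add_dependencies(my_app generate_null_audio)
target_include_directories(my_app PRIVATE ${CMAKE_CURRENT_BINARY_DIR})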

how to organize fixture data and access them from tests in C/C++

How do I compute the path to data fixture files in test code, given:
test/{main.cpp,one_test.cpp,two_test.cpp}
compilation done in build/
test/fixtures/{conf_1.cfg}
The problem I'm facing is as follows:
/* in test/one_test.cpp */
TEST_CASE( "Config from file", "[config]" ) {
    Config conf;
    REQUIRE( conf.read(??? + "/conf_1.cfg") );
}
The solution I found so far is to define a macro at configure time:
#define TEST_DIR "/absolute/path/to/test"
which is obtained in my wscript with
def configure(cnf):
    # ...
    cnf.env.TEST_DIR = cnf.path.get_src().abspath()
    cnf.define('TEST_DIR', cnf.env.TEST_DIR)
    cnf.write_config_header('include/config.h')
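Since waf quotes string values in the generated header, TEST_DIR expands to a string literal and adjacent-literal concatenation does the rest (assuming the generated header is included as config.h, as above):
/* in test/one_test.cpp */
#include "config.h" /* generated header: #define TEST_DIR "/absolute/path/to/test" */

TEST_CASE( "Config from file", "[config]" ) {
    Config conf;
    /* adjacent string literals are concatenated at compile time */
    REQUIRE( conf.read(TEST_DIR "/fixtures/conf_1.cfg") );
}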
Other attempts included __FILE__, which expanded to ../test/one_test.cpp, but I couldn't make use of that.
Some background: I'm using the Catch testing framework, with the waf build tool.
Is there a common practice or pattern for this, possibly dependent on the testing framework?
We found this hard to solve at compile/build time, as refactoring components (and therefore tests) would move code around. We found two possible solutions:
Put the data into the test. This is only practical if it's short and human-readable - strings or an easy hex dump. You could always put the data into a header file if that would make the test easier to maintain.
Specify the location of the data files at the command line when you run the tests. For this, you may need your own main (see 'Supplying your own main()' in the Catch documentation).
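That might look roughly like this (a sketch for Catch2 v2; the --fixtures option name and the fixture_dir global are my own invention):
#define CATCH_CONFIG_RUNNER
#include <catch2/catch.hpp>
#include <string>

std::string fixture_dir = "fixtures"; // default; tests build paths from this

int main(int argc, char* argv[]) {
    Catch::Session session;

    // add a --fixtures <dir> option on top of Catch's own command line
    using namespace Catch::clara;
    auto cli = session.cli()
        | Opt(fixture_dir, "dir")["--fixtures"]("directory containing fixture files");
    session.cli(cli);

    int ret = session.applyCommandLine(argc, argv);
    if (ret != 0)
        return ret;
    return session.run();
}

A test can then build its path as conf.read(fixture_dir + "/conf_1.cfg").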

Unit Tests for comparing text files in NUnit

I have a class that processes two XML files and produces a text file.
I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the expected output.
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a text-file diff algorithm?
class ReportGenerator
{
    public string Generate(string inputPathA, string inputPathB)
    {
        //do stuff
    }
}
[TestFixture]
public class ReportGeneratorTests
{
    static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        {
            using (StreamReader rs2 = File.OpenText(pathToActualResult))
            {
                string actualContents = rs2.ReadToEnd();
                string expectedContents = rs1.ReadToEnd();
                //this works, but the output could be a LOT more useful.
                Assert.AreEqual(expectedContents, actualContents);
            }
        }
    }

    static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    //etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.txt")]
[Row("y1.xml", "y2.xml", "y-expected.txt")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
You are probably asking about testing against "gold" data. I don't know if there is a universally accepted term for this kind of testing (golden-master testing is one name for it), but this is how we do it.
Create a base fixture class. It basically has void DoTest(string fileName), which reads the specified file into memory, executes the abstract transformation method string Transform(string text), then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content differs, it throws an exception containing the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark the lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results.
Then you have specific test fixtures, each implementing a Transform method that does the right job, with tests that look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of a failed test instantly tells you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, like ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet that will generate a test for you in a second; you will spend much more time preparing data.
You will also need some test data and a way for your base fixture to find it; be sure to set up rules about this for the project. If a test fails, dump the actual output to a file next to the gold file, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it over to become the "gold".
I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, then diff the XML files (without writing the result to disk) and compare it to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop would stop when it doesn't find the next number.
Then, you can write new tests just by adding new text files.
Rather than calling Assert.AreEqual, you could parse the two input streams yourself, keep a count of lines and columns, and compare the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
Note that, as a rule, I'd only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye or some other method that the data it contains is correct!
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: In reality, it was always enough for me to simply read the whole file into a string and compare the two strings. For reporting, it is enough to see that the test failed. When debugging, I usually diff the files using Araxis Merge to see exactly where the issues are.