CMake CTest output to JUnit XML - unit-testing

Is there a way to report the results in JUnit XML format with CTest?
I found the --output-junit command-line switch, but running ctest --output-junit testRes.xml does not create an output file...

ctest --output-junit testRes.xml does not create an output file...
This is a relatively new feature; you just need to update your CMake/CTest to v3.21.4 or higher (ref. https://cmake.org/cmake/help/v3.21/manual/ctest.1.html).
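Once you are on a new enough version, a minimal invocation looks like this (a sketch assuming an out-of-source build directory named build; --test-dir itself needs CMake 3.20+):
cmake -S . -B build                                    # configure
cmake --build build                                    # build the tests
ctest --test-dir build --output-junit testRes.xml      # run the tests and write the JUnit report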

Same issue here. I haven't examined it deeply, but there is a convenient workaround: ask CMake to call the test executable with the framework's native option for producing a JUnit report by itself.
This approach gives you as detailed a JUnit report as possible. Such a report will contain individual log records for each test case inside the called executable, not one entry for the whole executable at once. I proceed from the assumption that, in the general case, CMake cannot parse the stdout of every test framework at every verbosity level and collect enough data to produce a pretty JUnit report.
Moving on to the example, let's say we are dealing with a unit test based on Boost.Test. Then just add it to the CMake project as follows:
add_test(
  NAME ${test_name}
  COMMAND ${boost_test_executable_file} --logger=JUNIT,message,${path_to_junit_log}
)
and get a JUnit report.
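The same approach works with any framework that can emit JUnit itself. For example, a Catch2-based test could be registered like this (a sketch mirroring the Boost.Test snippet above; the variable names are placeholders):
add_test(
  NAME ${test_name}
  COMMAND ${catch2_test_executable_file} --reporter junit --out ${path_to_junit_log}
)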


Pass $(location) to Bazel --test_arg

The question
Is it possible to pass a general $(location) to bazel test in the --test_arg argument, which is re-evaluated for each executed test?
Context
What I am trying to do is use bazel test to bulk execute tests, but produce unique output files.
I am using Catch2 with the --out argument to specify a JUnit XML output file location. I could have Catch2 output the JUnit to STDOUT, but then test.log gets somewhat polluted with test data produced by Bazel and any other STDOUT or STDERR that Catch2 produces.
E.g. I am trying to produce the following:
./bazel-testlogs/
  Folder/
    Tests/
      Test1/
        Test1_JUNIT.xml
      Test2/
        Test2_JUNIT.xml
We ended up patching Catch2 to look for the XML_OUTPUT_FILE environment variable, and use that as the report output path. The inspiration came from this similar post.
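An alternative to patching (a hedged sketch, with a placeholder binary name and simplified runfiles handling) is to register a small wrapper script as the test, e.g. via sh_test, and let it forward Bazel's per-test XML_OUTPUT_FILE to Catch2's --out option:
#!/usr/bin/env bash
# test_wrapper.sh - run the Catch2 binary and write its JUnit report where Bazel expects it
set -euo pipefail
exec ./my_catch2_test --reporter junit --out "${XML_OUTPUT_FILE:-junit.xml}" "$@"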

How Do I Setup SonarQube cfamil.gcov Correctly?

I cannot get coverage reporting to work within SonarQube. I have a C++ project for which I am using the build-wrapper-linux-x86-64 along with the sonar-scanner. The basic static analysis for the source code seems to work but there is nothing about test code coverage reported within SonarQube.
As part of the same workflow I am using lcov and genhtml to make a unit test coverage report, so I am confident that most of the code coverage steps are being correctly executed. When I manually view the .gcov files I can see run counts in the first column, so there is data there.
I have my code organised into modules. The sonar-project.properties file includes the following:
# List of the module identifiers
sonar.modules=Module1,Module2
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
# This property is optional if sonar.modules is set.
sonar.sources=./Sources,./Tests
HeliosEmulator.sonar.sources=./Application,./Sources,./Tests
sonar.cfamily.build-wrapper-output=build_output
# Existing reports
#sonar.cfamily.cppunit.reportsPath=junit
sonar.cfamily.gcov.reportsPath=.
#sonar.cxx.cppcheck.reportPath=cppcheck-result-1.xml
#sonar.cxx.xunit.reportPath=cpputest_*.xml
sonar.junit.reportPaths=junit
I would also like to get the unit test results displayed under the Sonar tools. As I am using the CppUTest framework I do not have xunit or junit test output at present. This can be dealt with as a separate issue, but as I have been unable to find much documentation online on how to use the cfamily scanner, I do not know whether the tests not being listed is relevant.
I had forgotten to set up my CI system correctly. The .gcov files did not exist for the job that was running the sonar-scanner; they only existed in the testing job that generated the coverage report. With no files in the scanner job, it cannot make a coverage report.
When I set the GitLab CI system I am using to keep the .gcov files as artefacts the coverage reporting suddenly started working.
The .gcov files were generated by a test job and need to be transferred to the sonar-scanner job via the artefact store. This is because GitLab CI does not share a work area between dependent jobs and you have to explicitly say what files must be copied.
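A hedged sketch of the relevant .gitlab-ci.yml pieces (the job names, stage names, make target and build/ path are illustrative, not this project's actual configuration):
stages:
  - test
  - analyse

test:
  stage: test
  script:
    - make coverage          # hypothetical target that runs the tests and generates the .gcov files
  artifacts:
    paths:
      - build/               # directory containing the .gcov files to hand over

sonarqube:
  stage: analyse
  dependencies:
    - test                   # copies the test job's artifacts into this job's workspace
  script:
    - sonar-scanner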

Generate test results using xunit in VSO build task for asp.net core app

I have this build:
It works fine. The only issue is that the test results get overwritten, so I actually end up with the test results for only the last test project executed.
This is what the build engine executes:
C:\Program Files\dotnet\dotnet.exe test C:/agent/_work/4/s/test/Services.UnitTests/project.json --configuration release -xml ./TEST-tle.xml
C:\Program Files\dotnet\dotnet.exe test C:/agent/_work/4/s/test/Web.UnitTests/project.json --configuration release -xml ./TEST-tle.xml
What could help:
1) Having "dotnet test" generate the XML output file name itself - I did not find a way to do that.
2) Using a variable for the -xml output file in the Build Task. That variable could be a random string/number, or just the name of the project being tested - like what the build engine feeds to "dotnet.exe test". I found no way to do that either.
Any ideas? Thanks.
I think that, although you're running the task against all of the projects in one go, because the .NET Core (Preview) task doesn't have a working-directory setting, the test results are being generated at the solution root (or similar) and overwritten for each project in turn.
I set mine up using simple command line tasks...
Tool: dotnet
Arguments: test -xml testresults.xml
Working folder: {insert the folder for the project to test here}
These work fine but I have one set up for each project. You could try creating a task for each library and adding the full path to the test results argument (or name them appropriately as starain suggested).
This feels like a minor bug to me.
Based on my test, it doesn't recognize the date variable as the Build Number.
To deal with this issue, you can add another .NET Core (Test) step to run the xunit tests with a different result file.
For example:
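The idea is to give each test step its own output file name, e.g. using the two projects from the question (the report file names are illustrative):
dotnet test test/Services.UnitTests/project.json --configuration release -xml TEST-Services.UnitTests.xml
dotnet test test/Web.UnitTests/project.json --configuration release -xml TEST-Web.UnitTests.xml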

Publishing unit test results from TFS2013 Build to SonarQube

I have created a TFS2013 Build Definition using the template TfvcTemplate.12.xaml
I have specified a test run using VSTestRunner and enabled code coverage.
I am integrating this build with sonar analysis by specifying pre-build and post-test execution script.
Prebuild script arguments: begin /name:PrjName /key:PrjKey /version:1.0 /d:sonar.cs.vstest.reportsPaths="tst*.trx"
I have the "Unit Test Coverage" widget on my sonar dashboard.
It shows Unit Test Coverage %
However, it does not show the unit tests (i.e. how many tests were run, how many failed, etc.).
I looked in the build output. There is a "tst" folder, however it is empty.
I cannot find the trx files.
I believe that either the trx files are not being generated properly or I am not setting "sonar.cs.vstest.reportsPaths" correctly.
Please help!
Relative paths are not well supported: Specify an absolute path wildcard to your *.trx reports. See https://jira.sonarsource.com/browse/SONARMSBRU-100 for details on the bug.
Note that you probably can use the TFS 2013 environment variables to construct this absolute path wildcard: https://msdn.microsoft.com/en-us/library/hh850448.aspx#env_vars
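For instance, something along these lines (a sketch; TF_BUILD_BUILDDIRECTORY is one of those TFS 2013 variables, and the tst subfolder is the one from the question):
begin /name:PrjName /key:PrjKey /version:1.0 /d:sonar.cs.vstest.reportsPaths="%TF_BUILD_BUILDDIRECTORY%\tst\*.trx"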

How do you create tests for "make check" with GNU autotools

I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verification. I would like to just type "make check" to have them run automatically. My project is in C++, although I am still curious about writing automated tests for other languages as well.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It should then be automatically run when you make check, and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
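A minimal Makefile.am sketch tying this together (the program and source names are placeholders); listing the binary under check_PROGRAMS means it is built only for make check and never installed:
check_PROGRAMS = my-test-executable
my_test_executable_SOURCES = test-main.cpp
TESTS = $(check_PROGRAMS)
Note that Automake canonicalizes the dash in my-test-executable to an underscore for the _SOURCES variable name.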
I'm using Check 0.9.10.
Project layout:
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
In ./configure.ac:
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
In ./tests/Makefile.am for the test code:
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_builddir)/src/foo.h
check_foo_CFLAGS = @CHECK_CFLAGS@
check_foo_LDADD = @CHECK_LIBS@
and write the test code in ./tests/check_foo.c:
START_TEST (test_foo)
{
ck_assert( foo() == 0 );
ck_assert_int_eq( foo(), 0);
}
END_TEST
/// And there are some tcase_xxx codes to run this test
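For completeness, a hedged sketch of that boilerplate (the suite and tcase names are arbitrary, and the header include depends on your CFLAGS); it registers test_foo and returns non-zero on failure so Automake can report PASS/FAIL:
#include <check.h>
#include <stdlib.h>

/* START_TEST (test_foo) ... END_TEST as shown above */

int main(void)
{
    Suite *s = suite_create("foo");
    TCase *tc = tcase_create("core");
    tcase_add_test(tc, test_foo);      /* register the test case */
    suite_add_tcase(s, tc);

    SRunner *sr = srunner_create(s);
    srunner_run_all(sr, CK_NORMAL);    /* run and print a summary */
    int number_failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}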
Using Check you can also set timeouts and test for raised signals, which is very helpful.
You seem to be asking 2 questions in the first paragraph.
The first is about adding tests to the GNU autotools toolchain - but those tests, if I'm understanding you correctly, are both for validating that the environment necessary to build your application exists (dependent libraries and tools) and for adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests. You've proposed doing so from the autotools toolchain, presumably from the configure script. Doing that isn't conventional though - putting a 'test' target in your Makefile is a more conventional way of executing your test suite. The typical steps for building and installing an application with autotools (at least from a user's perspective, not from yours, the developer's) are to run the configure script, then run make, then optionally run make test, and finally make install.
For the second issue, not wanting cppunit to be a dependency, why not just distribute it with your C++ application? You can put it right into whatever archive format you're using (be it tar.gz, tar.bz2 or .zip) along with your source code. I've used cppunit in the past and was happy with it, having also used JUnit and other xUnit-style frameworks.
Here is a method without dependencies:
#src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
make check will then work naturally and show formatted, summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make dist, nothing from src/test/* will be in the tarball. Test code is not in the dist; only the source will be.
When you do a make distcheck, it will run make check and execute your tests.
You can use Automake's TESTS to run programs generated with check_PROGRAMS, but this assumes that you are using a log driver and a compiler for the output. It is probably easier to still use check_PROGRAMS but to invoke the test suite using a local rule in the Makefile:
check_PROGRAMS=testsuite
testsuite_SOURCES=...
testsuite_CFLAGS=...
testsuite_LDADD=...
check-local:
	./testsuite