Pass $(location) to Bazel --test_arg - c++

The question
Is it possible to pass a general $(location) to bazel test via the --test_arg flag, in a way that is re-evaluated for each executed test?
Context
What I am trying to do is use bazel test to bulk execute tests, but produce unique output files.
I am using Catch2 with the --out argument to specify a JUnit XML output file location. I could have Catch2 write the JUnit to STDOUT, but then test.log gets polluted with the test metadata Bazel produces, plus any other STDOUT or STDERR output from Catch2.
E.g. I am trying to produce the following:
./bazel-testlogs/
    Folder/
        Tests/
            Test1/
                Test1_JUNIT.xml
            Test2/
                Test2_JUNIT.xml
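For reference, the kind of invocation being asked about (hypothetical target name) would be something like:
bazel test //folder:tests --test_arg=--out --test_arg='$(location Test1_JUNIT.xml)'
but $(location) is only expanded inside BUILD-file attributes such as args, not in flags passed on the command line, which is what led to the workaround below.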

We ended up patching Catch2 to look for the XML_OUTPUT_FILE environment variable, and use that as the report output path. The inspiration came from this similar post.
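A minimal sketch of such a patch, assuming a custom main built around Catch::Session rather than Catch2's provided main (this is not the exact change from the post; the forwarded flags are Catch2's standard --reporter/--out options):

#include <cstdlib>
#include <vector>
#include <catch2/catch_session.hpp>  // Catch2 v3; for v2, define CATCH_CONFIG_RUNNER and include catch.hpp

int main(int argc, char* argv[]) {
    std::vector<const char*> args(argv, argv + argc);
    // Bazel exports XML_OUTPUT_FILE with a unique per-test path.
    const char* xml_out = std::getenv("XML_OUTPUT_FILE");
    if (xml_out != nullptr) {
        // Behave as if the caller had passed: --reporter junit --out <path>
        args.push_back("--reporter");
        args.push_back("junit");
        args.push_back("--out");
        args.push_back(xml_out);
    }
    return Catch::Session().run(static_cast<int>(args.size()), args.data());
}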

Related

CMake CTest output to JUnit XML

Is there a way to report the results in a junit xml format with CTest?
I have found the --output-junit command-line switch, but running ctest --output-junit testRes.xml does not create the output file...
This is a relatively new feature; you just need to update your CMake/CTest to v3.21.4 or higher (ref. https://cmake.org/cmake/help/v3.21/manual/ctest.1.html).
Same issue; I haven't examined it deeply, but I can suggest a convenient workaround: ask CMake to call the test executable with the executable's own native option for producing a JUnit report.
This approach gets you as detailed a JUnit report as possible: the report will contain individual log records for each test case inside the executable, not one record for the whole executable at once. I proceed from the assumption that, in the general case, CTest cannot parse the stdout of every test framework at every verbosity level and collect enough data to produce a pretty JUnit report.
Moving on to the example, let's say we are dealing with a unit test based on Boost.Test. Then just add it to the CMake project the following way:
add_test(
    NAME ${test_name}
    COMMAND ${boost_test_executable_file} --logger=JUNIT,message,${path_to_junit_log}
)
and get a JUnit report.

Testing generated Go code without co-locating tests

I have some auto-generated Go code for protobuf messages and I'm looking to add some additional testing without locating the test files under the same directory path. This is to allow easy removal of the existing generated code, to be sure that if a file is dropped from generation it's not accidentally left in the codebase.
The current layout of these files are controlled by prototool so I have something like the following:
/pkg/<other1>
/pkg/<other2>
/pkg/<name-generated>/v1/component_api.pb.go
/pkg/<name-generated>/v1/component_api.pb.gw.go
/pkg/<name-generated>/v1/component_api.pb.validate.go
The *.validate.go files come from envoyproxy/protoc-gen-validate, and the *.pb.go and *.pb.gw.go files come from the protobuf and grpc libraries. other1 and other2 are two helper libraries that we include along with the generated code to make things easier for client-side apps. The server side is in a separate repo and imports as needed.
Because it's useful to be able to delete /pkg/<name> before re-running prototool I've placed some tests of component_api (mostly to exercise the validate rules generated automatically) under the path:
/internal/pkg/<name>/v1/component_api_test.go
While this works for go test -v ./..., it appears not to work too well when generating coverage with -coverpkg.
go test -coverpkg=./... -coverprofile=coverage/go/coverage.out -v ./...
go build <pkgname>/internal/pkg/<name>/v1: no non-test Go files in ....
<output from the tests in /internal/pkg/<name>/v1/component_api_test.go>
....
....
coverage: 10.5% of statements in ./...
ok <pkgname>/internal/pkg/<name>/v1 0.014s coverage: 10.5% of statements in ./...
FAIL <pkgname>/pkg/other1 [build failed]
FAIL <pkgname>/pkg/other2 [build failed]
? <pkgname>/pkg/<name>/v1 [no test files]
FAIL
Coverage tests failed
Generated coverage/go/html/main.html
The reason for using -coverpkg is that without it nothing seems to notice that any of the code under <pkgname>/pkg/<name>/v1 is covered; we've previously seen issues where the reported numbers did not reflect the real level of coverage, which were solved by using -coverpkg:
go test -cover -coverprofile=coverage/go/coverage.out ./...
ok <pkgname>/internal/pkg/<name>/v1 0.007s coverage: [no statements]
ok <pkgname>/pkg/other1 0.005s coverage: 100.0% of statements
ok <pkgname>/pkg/other2 0.177s coverage: 100.0% of statements
? <pkgname>/pkg/<name>/v1 [no test files]
The resulting coverage/go/coverage.out contains no mention of anything under <pkgname>/pkg/<name>/v1 being exercised.
I'm not attached to the current layout, beyond <pkgname>/pkg/<name>/v1 being automatically managed by prototool and its rules around naming the generated files. I would like the other modules we have to remain exported for use as helper libraries, and I would like to be able to add tests for <pkgname>/pkg/<name>/v1 without locating them in the same directory (to allow easy delete + recreate of generated files), while still getting sensible coverage reports.
I've tried fiddling with the packages passed to -coverpkg and replacing ./... on the command line, but haven't been able to come up with something that works. Perhaps I'm just not familiar with the right invocation?
Other than that is there a different layout that will take care of this for me?
To handle this scenario, simply create a doc.go file in the same directory as the dislocated tests, containing just the package clause and a comment. This allows the standard arguments to work; Go appears to be perfectly happy with an otherwise empty file.
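A sketch of that file, placed at /internal/pkg/<name>/v1/doc.go (hedged: the comment wording is illustrative):

// Package v1 holds the dislocated tests for the generated component_api
// package. This otherwise-empty file gives go build a non-test Go file,
// which is all -coverpkg needs to treat the directory as a buildable package.
package v1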
Once in place the following will work as expected.
go test -coverpkg=./... -coverprofile=coverage/go/coverage.out -v ./...
Idea based on suggestion in https://stackoverflow.com/a/47025370/1597808

Publishing unit test results from TFS2013 Build to SonarQube

I have created a TFS2013 Build Definition using the template TfvcTemplate.12.xaml
I have specified a test run using VSTestRunner and enabled code coverage.
I am integrating this build with sonar analysis by specifying pre-build and post-test execution script.
Prebuild script arguments: begin /name:PrjName /key:PrjKey /version:1.0 /d:sonar.cs.vstest.reportsPaths="tst*.trx"
I have the "Unit Test Coverage" widget on my sonar dashboard.
It shows Unit Test Coverage %
However, it does not show the unit tests (i.e. how many tests were run, how many failed, etc.).
I looked in the build output. There is a "tst" folder, however it is empty.
I cannot find the trx files.
I believe that either the trx files are not being generated properly, or I am not setting sonar.cs.vstest.reportsPaths correctly. Please help!
Relative paths are not well supported: Specify an absolute path wildcard to your *.trx reports. See https://jira.sonarsource.com/browse/SONARMSBRU-100 for details on the bug.
Note that you probably can use the TFS 2013 environment variables to construct this absolute path wildcard: https://msdn.microsoft.com/en-us/library/hh850448.aspx#env_vars
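For example, using one of those TFS 2013 variables (hedged: TF_BUILD_BUILDDIRECTORY is an assumption about where your tst folder lives relative to the build directory):
begin /name:PrjName /key:PrjKey /version:1.0 /d:sonar.cs.vstest.reportsPaths="%TF_BUILD_BUILDDIRECTORY%\tst\*.trx"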

How to make MATLAB xUnit work on MATLAB R2008b (7.7)?

I copied the matlab_xunit folder to C:\Program Files, and included it (and its subfolders) on the MATLAB path. Now MATLAB recognizes new commands such as
runtests
But this command does not find any tests in the current folder. What have I done wrong? What else can I do?
>> runtests
Starting test run with 0 test cases.
PASSED in 0.000 seconds.
I am the creator of MATLAB xUnit. The most likely explanation for what you are seeing is some problem in the test files. Can you post a sample test file so I can look at it?
If you are writing subfunction-style test files, do any files in your current directory start with "test" or "Test"? Does the file contain any subfunctions that begin with "test" or "Test"? When you call one of those files with no input arguments and a single output argument, does it return a TestSuite object? If not, then double-check the documentation about creating subfunction tests.
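For reference, a minimal subfunction-style test file (test_example.m; the names are illustrative) would look like:

function test_suite = test_example
% initTestSuite collects the subfunctions below into a TestSuite object.
initTestSuite;

function test_addition
assertEqual(1 + 1, 2);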
Are you instead writing test files that subclass TestCase? Do they contain methods that begin with "test" or "Test"?
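A minimal TestCase subclass, for comparison (again with illustrative names):

classdef TestExample < TestCase
    methods
        function self = TestExample(name)
            % Forward the test-method name to the TestCase constructor.
            self = self@TestCase(name);
        end
        function testAddition(self)
            assertEqual(1 + 1, 2);
        end
    end
end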
This document on the File Exchange page for the MATLAB xUnit Test Framework submission should help. It says that you have to create a folder with your test-case M-files in it, then make that your working directory using cd.

How do you create tests for "make check" with GNU autotools

I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verification. I would like to just type "make check" to have them run automatically. My project is in C++, although I am still curious about writing automated tests for other languages as well.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It should then be automatically run when you make check, and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
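Putting that together, a minimal Makefile.am fragment might look like this (a sketch with hypothetical names; check_PROGRAMS makes the binaries get built by make check without being installed):

check_PROGRAMS = my-test-executable
my_test_executable_SOURCES = test-main.cpp
my_test_executable_LDADD = -lcppunit   # only if the tests link against cppunit
TESTS = $(check_PROGRAMS)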
I'm using Check 0.9.10. My project layout is:
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
./configure.ac
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
./tests/Makefile.am for the test code:
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_srcdir)/src/foo.h
check_foo_CFLAGS = @CHECK_CFLAGS@
and write the test code in ./tests/check_foo.c:
#include <check.h>
#include "foo.h"   /* declares foo(), from src/ */

START_TEST (test_foo)
{
    ck_assert( foo() == 0 );
    ck_assert_int_eq( foo(), 0 );
}
END_TEST
/* ...plus the tcase/suite boilerplate needed to run this test. */
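A hedged sketch of that boilerplate (structure follows Check's SRunner API; the suite and tcase names are illustrative), appended to check_foo.c:

Suite *foo_suite(void)
{
    Suite *s = suite_create("foo");
    TCase *tc_core = tcase_create("core");
    tcase_add_test(tc_core, test_foo);
    suite_add_tcase(s, tc_core);
    return s;
}

int main(void)
{
    SRunner *sr = srunner_create(foo_suite());
    srunner_run_all(sr, CK_NORMAL);
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? 0 : 1;  /* non-zero exit marks the TESTS entry failed */
}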
Check also gives you test timeouts and signal assertions (tcase_set_timeout, tcase_add_test_raise_signal), which are very helpful.
You seem to be asking two questions in the first paragraph.
The first is about adding tests to the GNU autotools toolchain. Those tests, if I'm understanding you correctly, are both for validating that the environment necessary to build your application exists (dependent libraries and tools) and for adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests. You've proposed doing so from the autotools toolchain, presumably from the configure script. That isn't conventional, though; the conventional way is to execute the test suite from a check target in your Makefile. The typical steps for building and installing an application with autotools (at least from a user's perspective, not from yours, the developer's) are: run the configure script, run make, optionally run make check, and finally run make install.
For the second issue, not wanting cppunit to be a dependency: why not just distribute it with your C++ application? You can put it right into whatever archive format you're using (be it .tar.gz, .tar.bz2, or .zip) along with your source code. I've used cppunit in the past and was happy with it, having used JUnit and other xUnit-style frameworks.
Here is a method without dependencies:
#src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
make check will then work out of the box, showing formatted and summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make distcheck, it will run make check and execute your tests. Note that, by default, Automake does distribute the sources listed in *_SOURCES in the make dist tarball; prefix the variable with nodist_ (e.g. nodist_test1_SOURCES) if you want to keep the test code out of it.
You can use Automake's TESTS to run programs generated with check_PROGRAMS, but this assumes that you are using the default log driver and log compiler for the output. It is probably easier to still use check_PROGRAMS but to invoke the test suite from a check-local rule in the Makefile:
check_PROGRAMS=testsuite
testsuite_SOURCES=...
testsuite_CFLAGS=...
testsuite_LDADD=...
check-local:
	./testsuite