How Do I Set Up SonarQube cfamily.gcov Correctly? - c++

I cannot get coverage reporting to work within SonarQube. I have a C++ project for which I am using the build-wrapper-linux-x86-64 along with the sonar-scanner. The basic static analysis for the source code seems to work but there is nothing about test code coverage reported within SonarQube.
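For reference, the analysis itself is driven by commands along these lines (make here is just a placeholder for my actual build command):
build-wrapper-linux-x86-64 --out-dir build_output make clean all
sonar-scanner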
As part of the same workflow I am using lcov and genhtml to make a unit test coverage report, so I am confident that most of the code coverage steps are being correctly executed. When I manually view the .gcov files I can see run counts in the first column, so there is data there.
I have my code organised into modules. The sonar-project.properties file includes the following:
# List of the module identifiers
sonar.modules=Module1,Module2
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
# This property is optional if sonar.modules is set.
sonar.sources=./Sources,./Tests
HeliosEmulator.sonar.sources=./Application,./Sources,./Tests
sonar.cfamily.build-wrapper-output=build_output
# Existing reports
sonar.cfamily.build-wrapper-output=build_output
#sonar.cfamily.cppunit.reportsPath=junit
sonar.cfamily.gcov.reportsPath=.
#sonar.cxx.cppcheck.reportPath=cppcheck-result-1.xml
#sonar.cxx.xunit.reportPath=cpputest_*.xml
sonar.junit.reportPaths=junit
I would also like to get the unit test results displayed under the Sonar tools. As I am using the CppUTest framework I do not have an xunit or junit test output at present though. This can be dealt with as a separate issue, but as I have been unable to find much documentation on how to use the cfamily scanner online, I do not know whether the tests not being listed is relevant.
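(For what it's worth, I believe CppUTest's command line runner can emit JUnit-style XML via a -ojunit flag, which would produce the cpputest_*.xml files the commented-out property above refers to; the runner name below is assumed:)
./AllTests -ojunit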

I had forgotten to set up my CI system correctly. The .gcov files did not exist for the job that was running the sonar-scanner; they only existed in the testing job that generated the coverage report. With no files in the scanner job, it cannot produce a coverage report.
When I configured the GitLab CI system I am using to keep the .gcov files as artefacts, the coverage reporting suddenly started working.
The .gcov files were generated by a test job and need to be transferred to the sonar-scanner job via the artefact store, because GitLab CI does not share a work area between dependent jobs and you have to say explicitly which files must be carried over (see the sketch below).
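A minimal sketch of the relevant .gitlab-ci.yml pieces, assuming a test job that leaves the .gcov files in its work tree and a scanner job that needs them (job names, stages and the test script are illustrative):
stages:
  - test
  - analyse

unit_tests:
  stage: test
  script:
    - ./run_tests_with_coverage.sh   # hypothetical script that runs the tests and leaves *.gcov files at the repository root
  artifacts:
    paths:
      - "*.gcov"

sonar_scan:
  stage: analyse
  dependencies:
    - unit_tests                     # pulls the *.gcov artefacts into this job's workspace
  script:
    - build-wrapper-linux-x86-64 --out-dir build_output make clean all
    - sonar-scanner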

Related

How to set path at test discovery in Visual Studio?

How can I set the path to my external binaries during test discovery in Visual Studio's Test Explorer? And after that, how do I make sure it uses the correct paths?
I use Windows 10 and VS 2019. I have a solution that builds some binaries and some tests into different folders. Also, I have some 3rd party dependencies, each in its own folder.
Something like:
solutionDir/
-ownBinaries/
-testBinaries/
-externalBinaries/
I'd like to use the Test Explorer to run my tests. For this purpose, I use a .runsettings file. I installed the Google Test adapter via NuGet (later it will run on CI, so this is the only option). The automatic runsettings discovery is disabled, and this file is selected as the runsettings file. It overrides the workingDir to my ownBinaries folder, and extends the PATH environment variable with the externalBinaries. The relevant parts are:
<SolutionSettings>
  <Settings>
    <AdditionalTestExecutionParam>-testdirectory=$(SolutionDir)</AdditionalTestExecutionParam>
    <WorkingDir>$(SolutionDir)ownBinaries</WorkingDir>
    <PathExtension>$(SolutionDir)externalBinaries</PathExtension>
  </Settings>
</SolutionSettings>
This works fine once my tests have been discovered, but I have problems when Test Explorer tries to discover them.
I use Google Test and C++, so test discovery tries to run the test executable with the --gtest_list_tests argument, then populates the view with the test names, cases, etc. The binaries themselves are just fine: they build without error, I can run them from the debugger, and they produce the output I want.
But Test Explorer won't show them, because it doesn't set the externalBinaries path.
This is what led me to this situation.
First I copied every binary next to my test exe, namely into the testBinaries folder. Then I could run it in cmd with the --gtest_list_tests argument. Everything was fine, all my test names showed up. I started VS, and Test Explorer discovered all my tests and was able to run them.
Then I did a clean build, so the external stuff was deleted from the testBinaries folder. Test Explorer had cached the test names, so it was still able to run them.
After restarting VS, Test Explorer tries to discover my tests, but it fails with this helpful message (date and time removed):
Google Test Adapter: Test discovery starting...
Failed to run test executable 'D:\MySolution\testBinaries\SBCUnitTest.exe': One or more errors occurred.
Check out Google Test Adapter's trouble shooting section at https://github.com/csoltenborn/GoogleTestAdapter#trouble_shooting
In particular: launch command prompt, change into directory '..\ownBinaries', and execute the following command to make sure your tests can be run in general.
D:\MySolution\testBinaries\SBCUnitTest.exe --gtest_list_tests -testdirectory=
Found 0 tests in executable D:\MySolution\testBinaries\SBCUnitTest.exe
Test discovery completed, overall duration: 00:00:00.3022924
Have you noticed that -testdirectory= is empty despite being set in the runsettings file?
I'm completely lost as to how to proceed. The workaround of copying all the files and then deleting everything but the test binaries every time I start VS is quite heavy.
Here is the link for the Troubleshooting section mentioned in the error message.
I've read through the readme file on GitHub, and also the runsettings docs on Microsoft's website.
Edit
I made progress with the VsTest.console.exe, I can successfully run all my tests with the proper arguments as below:
& "VSTest.console.exe" *_uTest.exe /Settings:..\MySolution.gta.runsettings /TestAdapterPath:"..\packages\GoogleTestAdapter.0.18.0\build\_common\"
I use the same *.runsettings and *.gta_settings_helper files. Those files are used to get absolute paths for the dependencies. I could run this from different folders, but then I had to adjust the arguments (test discovery pattern, relative path to runsettings, and relative path to GTA).
The great news is that it runs successfully on Azure (which uses vstest.console).
Edit 2
Tried to merge the workingDir and pathExtension nodes so that only one (the pathExtension) is needed. No success.
Tried to install Test Adapter for Google Test in the VS installer, delete the runsettings file, and set the properties in VS -> Tools -> Options -> Test Adapter for Google Test. Even the example pathExtension didn't work for me.
Found the extended logs under %AppData%/Local/Temp/TestAdapter/someNumber/*.txt, and in that log I found one line containing the runsettings. I paste the formatted version of the log here:
<RunSettings>
  <GoogleTestAdapterSettings>
    <SolutionSettings>
      <Settings>
        <WorkingDir>$(SolutionDir)</WorkingDir>
        <PathExtension>$(SolutionDir)externalBinaries</PathExtension>
      </Settings>
    </SolutionSettings>
    <ProjectSettings>
    </ProjectSettings>
    <GoogleTestAdapterSettings>
      <SolutionSettings>
        <Settings>
        </Settings>
      </SolutionSettings>
      <ProjectSettings>
      </ProjectSettings>
    </GoogleTestAdapterSettings>
  </GoogleTestAdapterSettings>
</RunSettings>
Does anybody know why there is an empty Google Test Adapter settings block in there? Where does it come from? I think this is what overwrites my settings.
It turned out that before the first run the relative paths are not known.
Trivial solution
Add the full path to the PATH extension under Visual Studio -> Options -> Test Adapter for Google Test, while the custom *.runsettings file is not selected.
Using this method all my tests are discoverable, but it is a manual setting for each cloned repo.
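In other words, where the runsettings file had $(SolutionDir)externalBinaries, the Options field gets the already-expanded absolute path, for example (path hypothetical, following the error message above):
D:\MySolution\externalBinaries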

Testing generated Go code without co-locating tests

I have some auto-generated Go code for protobuf messages and I'm looking to add some additional testing without locating the test files under the same directory path. This is to allow easy removal of the existing generated code, to be sure that if a file is no longer generated it isn't accidentally left in the codebase.
The current layout of these files is controlled by prototool, so I have something like the following:
/pkg/<other1>
/pkg/<other2>
/pkg/<name-generated>/v1/component_api.pb.go
/pkg/<name-generated>/v1/component_api.pb.gw.go
/pkg/<name-generated>/v1/component_api.pb.validate.go
The *.validate.go files come from envoyproxy/protoc-gen-validate, and *.pb.go & *.pb.gw.go come from the protobuf and grpc libraries. other1 and other2 are two helper libraries that we have included along with the generated code to make it easier for client-side apps. The server side is in a separate repo and imports as needed.
Because it's useful to be able to delete /pkg/<name> before re-running prototool I've placed some tests of component_api (mostly to exercise the validate rules generated automatically) under the path:
/internal/pkg/<name>/v1/component_api_test.go
While this works for go test -v ./..., it appears not to work too well when generating coverage with -coverpkg.
go test -coverpkg=./... -coverprofile=coverage/go/coverage.out -v ./...
go build <pkgname>/internal/pkg/<name>/v1: no non-test Go files in ....
<output from the tests in /internal/pkg/<name>/v1/component_api_test.go>
....
....
coverage: 10.5% of statements in ./...
ok <pkgname>/internal/pkg/<name>/v1 0.014s coverage: 10.5% of statements in ./...
FAIL <pkgname>/pkg/other1 [build failed]
FAIL <pkgname>/pkg/other2 [build failed]
? <pkgname>/pkg/<name>/v1 [no test files]
FAIL
Coverage tests failed
Generated coverage/go/html/main.html
The reason for using -coverpkg is that without it nothing seems to spot that any of the code under <pkgname>/pkg/<name>/v1 is covered, and we've previously seen issues with what it reports not showing the real level of coverage, which are solved by using -coverpkg:
go test -cover -coverprofile=coverage/go/coverage.out ./...
ok <pkgname>internal/pkg/<name>/v1 0.007s coverage: [no statements]
ok <pkgname>/pkg/other1 0.005s coverage: 100.0% of statements
ok <pkgname>/pkg/other2 0.177s coverage: 100.0% of statements
? <pkgname>/pkg/<name>/v1 [no test files]
Looking at the resulting coverage/go/coverage.out, it includes no mention of anything under <pkgname>/pkg/<name>/v1 being exercised.
I'm not attached to the current layout beyond <pkgname>/pkg/<name>/v1 being automatically managed by prototool and its rules around naming the generated files. I would like to ensure the other modules we have can remain exported to be used as helper libraries, and I would like to be able to add tests for <pkgname>/pkg/<name>/v1 without needing to locate them in the same directory, to allow for easy delete + recreate of the generated files, while still getting sensible coverage reports.
I've tried fiddling with the packages passed to -coverpkg and replacing ./... on the command-line and haven't been able to come up with something that works. Perhaps I'm just not familiar with the right invocation?
Other than that is there a different layout that will take care of this for me?
To handle this scenario, simply create a doc.go file in the same directory as the dislocated tests, containing just the package clause and a comment. This allows the standard arguments to work, and Go appears to be reasonably happy with an otherwise empty file; see the sketch below.
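A sketch of such a file, assuming the tests in /internal/pkg/<name>/v1 belong to a package called v1 (the package name is illustrative and must match whatever the test files there declare):
// Package v1 exists only to give this test-only directory a non-test Go file,
// so that go test -coverpkg=./... ./... can build the package.
package v1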
Once in place the following will work as expected.
go test -coverpkg=./... -coverprofile=coverage/go/coverage.out -v ./...
Idea based on suggestion in https://stackoverflow.com/a/47025370/1597808

Show code coverage with source code in Jenkins with Cobertura (run result from other machine)

Background
I have a large C++ application with a complex directory structure. The structure is so deep that the code repository can't be checked out into the Jenkins workspace but has to live in some root directory, otherwise the build fails because the path length limit is exceeded.
Since the application is tested in different environments, the test application is run on a different machine. The application and all resources are compressed and copied to the test machine, where the tests are run using OpenCppCoverage and a Cobertura XML report is produced.
Since the source code is needed to show the coverage result, the XML is copied back to the build machine and then fed to the Jenkins Cobertura plugin.
Problem
The coverage report shows only percentage results per module or source file. The code content is not shown; instead, this error message is shown:
Source
Source code is unavailable. Some possible reasons are:
This is not the most recent build (to save on disk space, this plugin only keeps the most recent build’s source code).
Cobertura found the source code but did not provide enough information to locate the source code.
Cobertura could not find the source code, so this plugin has no hope of finding it.
You do not have sufficient permissions to view this file.
Now I've found this SO answer which is promising:
The output xml file has to be in the same folder as where coverage is run, so:
coverage xml -o coverage.xml
The reference to the source folder is put into coverage.xml and if the output file is put into another folder, the reference to the source folder will be incorrect.
Problem is that:
I've run the tests on a different machine (this can be overcome by a script which modifies the paths in the xml).
my source code can't be inside the workspace at build time
placing the xml in the respective source code directory is not accepted by the Cobertura plugin. It ends with this error:
[Cobertura] Publishing Cobertura coverage report...
FATAL: Unable to find coverage results
java.io.IOException: Expecting Ant GLOB pattern, but saw 'C:/build_coverage/Products/MyMagicProduct/Src/test/*Coverage.xml'. See http://ant.apache.org/manual/Types/fileset.html for syntax
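(As far as I understand, the plugin only accepts Ant GLOB patterns relative to the Jenkins workspace, so a workspace-relative pattern like the one below would at least be accepted syntactically, assuming the xml is first copied somewhere under the workspace:)
**/*Coverage.xml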
This is part of the xml result (before modifications):
<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="0.63669186741173223" branch-rate="0" complexity="0" branches-covered="0" branches-valid="0" timestamp="0" lines-covered="122029" lines-valid="191661" version="0">
<sources>
<source>c:</source>
<source>C:</source>
</sources>
<packages>
<package name="C:\jenkins\workspace\MMP_coverage\MyMagicProduct\src\x64\Debug\MMPServer.exe" line-rate="0.63040511358728513" branch-rate="0" complexity="0">
<classes>
<class name="AuditHandler.cpp" filename="build_coverage\Products\MyMagicProduct\Src\Common\AuditHandler.cpp" line-rate="0.92682926829268297" branch-rate="0" complexity="0">
<methods/>
<lines>
<line number="18" hits="1"/>
<line number="19" hits="1"/>
<line number="23" hits="1"/>
<line number="25" hits="1"/>
<line number="27" hits="1"/>
....
</lines>
</class>
....
The biggest issue is that I'm not sure whether the location of the xml is actually the problem, since the plugin doesn't report details of the issues it encountered when trying to fetch/find the respective source code. The second bullet from Cobertura, which may explain the problem, is totally confusing:
Cobertura found the source code but did not provide enough information to locate the source code.
What else I've tried
I've ensured that anyone can read the source code (to avoid problems with access)
I've modified the xml so that filename contains a path relative to the Jenkins workspace, or to the directory where the xml coverage report is located
copied my source code to various locations, even one containing a "cobertura" directory, since I found something like this in the plugin source code
I've tried to understand the issue by inspecting the plugin's source code.
I've found a (slightly old) GitHub project which may be a hint on how to fix it; currently I'm trying to understand exactly what it does (I don't want to import this project into my build structure).
So far no luck.
Update:
Suddenly (I'm not sure what I did) it works for my account. The problem is that it works only for me; all other users have the same issue. This clearly indicates that the issue must be security related.
I encountered a very similar issue when I had to develop a CI pipeline for a huge C++ client. I had the best results when I avoided the Cobertura Plugin and instead used the HTML Publisher Plugin. The main issue I had was also finding the source files.
Convert OpenCppCoverage result to HTML
This step is quite easy. You have to add the parameter --export_type=html:<outputPath> (see Commandline-reference) to the OpenCppCoverage call.
mkdir CodeCoverage
OpenCppCoverage.exe --export_type=html:CodeCoverage <GoogleTest.exe>
The commands above should result in an HTML file at <jenkins_workspace>/CodeCoverage/index.html.
Publish the OpenCppCoverage result
To do this we use the HTML Publisher Plugin, as mentioned above. reportDir is the directory created in step one, which contains our HTML file; its path is relative to the Jenkins workspace.
publishHTML target: [
    allowMissing: false,
    alwaysLinkToLastBuild: true,
    keepAll: true,
    reportDir: 'CodeCoverage',
    reportFiles: 'index.html',
    reportName: 'Code Coverage'
]
and to be sure that everyone can download and check the result locally, we archive the OpenCppCoverage output:
archiveArtifacts artifacts: 'CodeCoverage/*.*'
You can now see the result in the sidebar of your pipeline under Code Coverage.
This is the solution that worked for me.
I hope this helps at least a bit. I can only advise avoiding the Cobertura Plugin; I wasted so much time trying to fix it and to get it to recognize my sources...
OK, I've found the reasons why I had problems with this plugin.
The xml from OpenCppCoverage is correct as it is. No changes are needed to make it work (as long as the sources are where the pdb file points to). Sources outside the Jenkins workspace are not the problem here. When I copied the executable from the build machine to the test machine, ran the tests with OpenCppCoverage and copied the result back to the build machine, it was just fine.
In the job configuration, any user who is supposed to view code coverage has to have access to Job/Workspace in the security section. In my case I've enabled this for all logged-in users. This covers the last bullet point of the error message.
Most important thing: the build must be successful, from beginning to end. It doesn't matter whether the step containing the call to the Cobertura plugin was successful; if any step (even a later one) fails, then Cobertura will not show code for this coverage run. In my case the build job was failing because one of the tests was timing out. This was caused by the OpenCppCoverage overhead, which slows the tests down by a factor of 3. My script was detecting the timeout and killing one of the tests.
I discovered by accident that an unsuccessful build was the problem. During experiments I noticed two cases in which Cobertura showed the source code:
I reran the job with all steps removed except the one responsible for publishing the Cobertura results
I ran the whole job in such a way that it ran a single test case, which passed
Not showing coverage if the build is not successful is reasonable (if a test failed then most probably the wrong branch of code was taken), but the UI should indicate this in a different way.
Conclusion
This is a great example of how important it is to report errors to the user with precise details about what went wrong and why. I wasted at least a whole week figuring out what was actually wrong and which bullet point of the error message actually applied to my case. In fact, the error message from the plugin doesn't cover all the reasons for not showing the code.
I will file a report that the plugin should give a better explanation of what went wrong.

WebStorm run all dart unit tests

In WebStorm 11 I want to create a run configuration which runs all dart tests in my project.
However there is no option to do this in the "Dart Test" configuration template. The only options are:
Test Kind: All in file, Test group, single test
Test file: must point to a .dart file, otherwise I get "Dart file is not found"
VM Options (text input)
If I point WebStorm to a single test file this command gets executed in the test window:
C:\path\to\dart\bin\dart.exe --ignore-unrecognized-flags --checked --trace_service_pause_events file:\\\C:\path\to\dart\bin\snapshots\pub.dart.snapshot run test:test -r json C:/path/to/project/test/someclass_test.dart
I don't want to create a run configuration for every unit test class I write; there must be a better way.
Currently I prefer to navigate to the project directory and just run
pub run test:test
This runs all tests which live in files ending with _test.dart, which is exactly what I want. More info here: https://github.com/dart-lang/test#running-tests
Is there no such option in WebStorm for dart developers?
According to the WEB-14747 ticket, this functionality is already implemented for the next major version.
You can try the latest EAP build of WebStorm v12 here.
I guess that's currently not supported.
The feature to run tests this way is quite new anyway.
If you think this feature is important, please create a feature request at https://youtrack.jetbrains.com/issues/WEB

Publishing unit test results from TFS2013 Build to SonarQube

I have created a TFS2013 Build Definition using the template TfvcTemplate.12.xaml
I have specified a test run using VSTestRunner and enabled code coverage.
I am integrating this build with Sonar analysis by specifying pre-build and post-test execution scripts.
Prebuild script arguments: begin /name:PrjName /key:PrjKey /version:1.0 /d:sonar.cs.vstest.reportsPaths="tst*.trx"
I have the "Unit Test Coverage" widget on my sonar dashboard.
It shows Unit Test Coverage %
However, it does not show the unit tests (i.e. how many tests were run, how many failed, etc.).
I looked in the build output. There is a "tst" folder, however it is empty.
I cannot find the trx files.
I believe that either the trx files are not properly generated or I am not setting "sonar.cs.vstest.reportsPaths" correctly.
Please help!
Relative paths are not well supported: Specify an absolute path wildcard to your *.trx reports. See https://jira.sonarsource.com/browse/SONARMSBRU-100 for details on the bug.
Note that you can probably use the TFS 2013 environment variables to construct this absolute path wildcard: https://msdn.microsoft.com/en-us/library/hh850448.aspx#env_vars
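For example, if the pre-build script is a batch file, the arguments could be built along these lines (the variable name comes from the linked environment-variable list; the tst subdirectory is taken from the question and should be adjusted to wherever the .trx files actually land):
begin /name:PrjName /key:PrjKey /version:1.0 /d:sonar.cs.vstest.reportsPaths="%TF_BUILD_BUILDDIRECTORY%\tst\*.trx"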