Testing generated Go code without co-locating tests

I have some auto-generated Go code for protobuf messages and I'm looking to add some additional tests without locating them under the same directory path. This is to allow easy removal of the existing generated code, so that if a file is dropped from generation it isn't accidentally left in the codebase.
The current layout of these files is controlled by prototool, so I have something like the following:
/pkg/<other1>
/pkg/<other2>
/pkg/<name-generated>/v1/component_api.pb.go
/pkg/<name-generated>/v1/component_api.pb.gw.go
/pkg/<name-generated>/v1/component_api.pb.validate.go
The *.validate.go files come from envoyproxy/protoc-gen-validate, and *.pb.go and *.pb.gw.go come from the protobuf and grpc libraries. other1 and other2 are two helper libraries that we include alongside the generated code to make life easier for client-side apps. The server side is in a separate repo and imports as needed.
Because it's useful to be able to delete /pkg/<name> before re-running prototool, I've placed some tests of component_api (mostly to exercise the automatically generated validate rules) under the path:
/internal/pkg/<name>/v1/component_api_test.go
While this works for go test -v ./..., it appears not to work too well when generating coverage with -coverpkg.
go test -coverpkg=./... -coverprofile=coverage/go/coverage.out -v ./...
go build <pkgname>/internal/pkg/<name>/v1: no non-test Go files in ....
<output from the tests in /internal/pkg/<name>/v1/component_api_test.go>
....
....
coverage: 10.5% of statements in ./...
ok <pkgname>/internal/pkg/<name>/v1 0.014s coverage: 10.5% of statements in ./...
FAIL <pkgname>/pkg/other1 [build failed]
FAIL <pkgname>/pkg/other2 [build failed]
? <pkgname>/pkg/<name>/v1 [no test files]
FAIL
Coverage tests failed
Generated coverage/go/html/main.html
The reason for using -coverpkg is that without it, nothing notices that any of the code under <pkgname>/pkg/<name>/v1 is covered. We've previously seen issues where the reported figures didn't reflect the real level of coverage, and those were solved by using -coverpkg:
go test -cover -coverprofile=coverage/go/coverage.out ./...
ok <pkgname>/internal/pkg/<name>/v1 0.007s coverage: [no statements]
ok <pkgname>/pkg/other1 0.005s coverage: 100.0% of statements
ok <pkgname>/pkg/other2 0.177s coverage: 100.0% of statements
? <pkgname>/pkg/<name>/v1 [no test files]
Looking at the resulting coverage/go/coverage.out, there is no mention of anything under <pkgname>/pkg/<name>/v1 being exercised.
I'm not attached to the current layout, beyond <pkgname>/pkg/<name>/v1 being automatically managed by prototool and its rules around naming the generated files. I'd like the other packages we have to remain exported for use as helper libraries, and I'd like to be able to add tests for <pkgname>/pkg/<name>/v1 without locating them in the same directory (to allow easy delete + recreate of generated files), while still getting sensible coverage reports.
I've tried fiddling with the packages passed to -coverpkg and replacing ./... on the command-line and haven't been able to come up with something that works. Perhaps I'm just not familiar with the right invocation?
Other than that is there a different layout that will take care of this for me?

To handle this scenario, create a doc.go file in the same directory as the dislocated tests, containing just the package clause and a comment. This allows the standard arguments to work; the Go toolchain appears to be reasonably happy with an otherwise empty file.
Once in place the following will work as expected.
go test -coverpkg=./... -coverprofile=coverage/go/coverage.out -v ./...
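For reference, a minimal doc.go can look like the following (the package name is assumed to match the non-test package your test files build against):

// Package v1 holds tests for the generated code in
// <pkgname>/pkg/<name>/v1. This file exists only so that the
// directory contains at least one non-test Go file.
package v1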
Idea based on suggestion in https://stackoverflow.com/a/47025370/1597808

Related

How to determine the tree of files which are imported during a test case?

When I run a test in Go, is there any way for me to get the list of files that the code imports, directly or indirectly? For example, this could help me rule out changes from certain parts of the codebase when debugging a failing test.
Alternatively, with Git, can we find out what the lowest common ancestor git tree node is for the files exercised in a given test?
Context: I'm looking into automated flakiness detection for my test suite, and I want to be able to know the dependency tree for every test so that I can detect flaky tests better.
For example, if TestX fails for version x of the code, and later on some files in the same codebase which are not used at all by TestX are changed, and then TestX passes, I want to be able to detect that this is a flaky test, even though the overall codebase that the test suite ran on has changed.
You are probably looking for go list -test -deps [packages].
For an explanation of what the flags do, you can check Go command List packages or modules:
-deps:
The -deps flag causes list to iterate over not just the named packages but also all their dependencies. It visits them in a depth-first post-order traversal, so that a package is listed only after all its dependencies. [...]
-test:
The -test flag causes list to report not only the named packages but also their test binaries (for packages with tests), to convey to source code analysis tools exactly how test binaries are constructed. The reported import path for a test binary is the import path of the package followed by a ".test" suffix, as in "math/rand.test". [...]
Maybe I'll state the obvious, but remember that list works on packages, not single files, so the command above will include dependencies of the non-test sources (which should be what you want anyway).
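As a sketch, assuming a hypothetical package path ./pkg/foo:

go list -test -deps ./pkg/foo

If the individual files are of interest rather than just import paths, a -f template over the standard go list fields can expand each package:

go list -test -deps -f '{{.ImportPath}}: {{.GoFiles}}' ./pkg/foo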

Show code coverage with source code in Jenkins with Cobertura (run result from other machine)

Background
I have a large C++ application with a complex directory structure. The structure is so deep that the code repository can't be stored in the Jenkins workspace and instead sits in a root directory; otherwise the build fails because the path length limit is exceeded.
Since the application is tested in different environments, the test application is run on a different machine. The application and all resources are compressed and copied to the test machine, where the tests are run using OpenCppCoverage and a Cobertura XML file is produced as the result.
Since the source code is needed to show the coverage result, the XML is copied back to the build machine and then fed to the Jenkins Cobertura plugin.
Problem
The coverage report shows only percentage results per module or source file. The code content is not shown; instead this error message appears:
Source
Source code is unavailable. Some possible reasons are:
This is not the most recent build (to save on disk space, this plugin only keeps the most recent build’s source code).
Cobertura found the source code but did not provide enough information to locate the source code.
Cobertura could not find the source code, so this plugin has no hope of finding it.
You do not have sufficient permissions to view this file.
Now I've found this SO answer, which is promising:
The output xml file has to be in the same folder as where coverage is run, so:
coverage xml -o coverage.xml
The reference to the source folder is put into coverage.xml and if the output file is put into another folder, the reference to the source folder will be incorrect.
The problem is that:
I've run the tests on a different machine (this can be overcome by a script which modifies the paths in the xml).
my source code can't be inside the workspace at build time.
placing the xml in the respective source directory is not accepted by the Cobertura plugin; it ends with this error:
[Cobertura] Publishing Cobertura coverage report...
FATAL: Unable to find coverage results
java.io.IOException: Expecting Ant GLOB pattern, but saw 'C:/build_coverage/Products/MyMagicProduct/Src/test/*Coverage.xml'. See http://ant.apache.org/manual/Types/fileset.html for syntax
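For what it's worth, the plugin expects a workspace-relative Ant-style glob rather than an absolute path; in a pipeline that might look something like this (the coberturaReportFile parameter name is from the plugin's pipeline syntax, and the pattern is only a guess at the layout):

cobertura coberturaReportFile: '**/*Coverage.xml'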
This is part of xml result (before modifications):
<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="0.63669186741173223" branch-rate="0" complexity="0" branches-covered="0" branches-valid="0" timestamp="0" lines-covered="122029" lines-valid="191661" version="0">
  <sources>
    <source>c:</source>
    <source>C:</source>
  </sources>
  <packages>
    <package name="C:\jenkins\workspace\MMP_coverage\MyMagicProduct\src\x64\Debug\MMPServer.exe" line-rate="0.63040511358728513" branch-rate="0" complexity="0">
      <classes>
        <class name="AuditHandler.cpp" filename="build_coverage\Products\MyMagicProduct\Src\Common\AuditHandler.cpp" line-rate="0.92682926829268297" branch-rate="0" complexity="0">
          <methods/>
          <lines>
            <line number="18" hits="1"/>
            <line number="19" hits="1"/>
            <line number="23" hits="1"/>
            <line number="25" hits="1"/>
            <line number="27" hits="1"/>
            ....
          </lines>
        </class>
        ....
The biggest issue is that I'm not sure whether the location of the xml is actually the problem, since the plugin doesn't report details of the issues it encountered while trying to find the respective source code. The second bullet from Cobertura, which may explain the problem, is totally confusing:
Cobertura found the source code but did not provide enough information to locate the source code.
What else I've tried
I've ensured that anyone can read the source code (to rule out access problems)
I've modified the xml so that filename contains a path relative to: the Jenkins workspace; the path where the xml file with the coverage report is located
copied my source code to various locations, even one containing a "cobertura" directory, since I found something like that in the plugin source code
I've tried to understand the issue by inspecting the plugin source code.
I've found some (a bit old) github project which may be a hint on how to fix it - currently I'm trying to work out what exactly it does (I don't want to import this project into my build structure).
So far no luck.
Update:
Update: Suddenly (I'm not sure what I changed) it works for my account. The problem is that it works only for me; all other users still have the same issue. This clearly indicates that the issue must be security-related.
I encountered a very similar issue when I had to develop a CI pipeline for a very large C++ client. I had the best results when I avoided the Cobertura plugin and instead used the HTML Publisher plugin. The main issue I had was also finding the source files.
Convert OpenCppCoverage result to HTML
This step is quite easy. You have to add the parameter --export_type=html:<outputPath> (see Commandline-reference) to the OpenCppCoverage call.
mkdir CodeCoverage
OpenCppCoverage.exe --export_type=html:CodeCoverage <GoogleTest.exe>
The commands above should result in an html file at <jenkins_workspace>/CodeCoverage/index.html.
Publish the OpenCppCoverage result
To do this we use the HTML Publisher plugin, as mentioned above. reportDir is the directory created in step one, which contains our html file; its path is relative to the Jenkins workspace.
publishHTML target: [
    allowMissing: false,
    alwaysLinkToLastBuild: true,
    keepAll: true,
    reportDir: 'CodeCoverage',
    reportFiles: 'index.html',
    reportName: 'Code Coverage'
]
and to be sure that everyone can download and check the result locally, we archive the result of OpenCppCoverage:
archiveArtifacts artifacts: 'CodeCoverage/*.*'
You can see the result in the sidebar of your pipeline under Code Coverage.
This is the solution that worked for me.
I hope this helps at least a bit. I can only advise avoiding the Cobertura plugin; I wasted so much time trying to fix it and get it to recognize my sources...
OK, I've found the reasons why I had problems with this plugin.
The xml from OpenCppCoverage is correct as-is. No changes are needed to make it work (as long as the sources are where the pdb file points to). Sources outside the Jenkins workspace are not the problem here. When I copied the executable from the build machine to the test machine, ran the tests with OpenCppCoverage, and copied the result back to the build machine, it was just fine.
In the job configuration, any user who is supposed to view code coverage has to have access to Job/Workspace in the security section. In my case I've enabled this for all logged-in users. This covers the last bullet point of the error message.
Most important thing: the build must be successful, from beginning to end. It doesn't matter whether the step containing the call to the Cobertura plugin succeeded: if any step (even a later one) fails, then Cobertura will not show the code for that coverage run. In my case the build job was failing because one of the tests was timing out; this was caused by the OpenCppCoverage overhead, which slows the tests down by a factor of 3. My script was detecting the timeout and killing one of the tests.
I discovered by accident that the unsuccessful build was the problem. During experiments I noticed two cases in which Cobertura showed the source code:
I reran the job with all steps removed except the one responsible for publishing the cobertura results
I ran the whole job in such a way that it ran a single test case, which passed
Not showing coverage if the build is unsuccessful is reasonable (if a test failed, then most probably a wrong branch of code was taken), but the UI should indicate that in a different way.
Conclusion
This is a great example of how important it is to report errors to the user with precise details of what went wrong and why. I wasted at least a whole week figuring out which bullet point of the error message actually applied to my case. In fact, the error message from the plugin doesn't cover all the reasons for not showing the code.
I will file a report that the plugin should give a better explanation of what went wrong.

How Do I Setup SonarQube cfamily.gcov Correctly?

I cannot get coverage reporting to work within SonarQube. I have a C++ project for which I am using build-wrapper-linux-x86-64 along with sonar-scanner. The basic static analysis of the source code seems to work, but nothing about test code coverage is reported within SonarQube.
As part of the same workflow I am using lcov and genhtml to make a unit test coverage report, so I am confident that most of the code coverage steps are being correctly executed. When I manually view the .gcov files I can see run counts in the first column, so there is data there.
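For context, the lcov/genhtml side of that workflow is roughly the following (the directory paths are assumptions; the actual invocation will differ per project):

lcov --capture --directory build --output-file coverage.info
genhtml coverage.info --output-directory coverage_html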
I have my code organised into modules. The sonar-project.properties file includes the following:
# List of the module identifiers
sonar.modules=Module1,Module2
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
# This property is optional if sonar.modules is set.
sonar.sources=./Sources,./Tests
HeliosEmulator.sonar.sources=./Application,./Sources,./Tests
sonar.cfamily.build-wrapper-output=build_output
# Existing reports
sonar.cfamily.build-wrapper-output=build_output
#sonar.cfamily.cppunit.reportsPath=junit
sonar.cfamily.gcov.reportsPath=.
#sonar.cxx.cppcheck.reportPath=cppcheck-result-1.xml
#sonar.cxx.xunit.reportPath=cpputest_*.xml
sonar.junit.reportPaths=junit
I would also like to get the unit test results displayed under the Sonar tools. As I am using the CppUTest framework, I do not have xunit or junit test output at present. This can be dealt with as a separate issue, but since I have been unable to find much documentation on using the cfamily scanner online, I do not know whether the tests not being listed is relevant.
I had forgotten to set up my CI system correctly. The .gcov files did not exist for the job that was running sonar-scanner; they only existed in the testing job that generated the coverage report. With no files in the scanner job, it cannot make a coverage report.
When I set the GitLab CI system I am using to keep the .gcov files as artefacts, the coverage reporting suddenly started working.
The .gcov files were generated by a test job and need to be transferred to the sonar-scanner job via the artefact store, because GitLab CI does not share a work area between dependent jobs and you have to state explicitly which files must be copied.
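A minimal .gitlab-ci.yml sketch of that arrangement (the job names, make target, and coverage/ directory are assumptions to adapt to your setup):

stages:
  - test
  - analyse

unit_tests:
  stage: test
  script:
    - make coverage            # hypothetical target: runs the tests and gcov
  artifacts:
    paths:
      - coverage/              # directory where the *.gcov files land

sonar_scan:
  stage: analyse
  dependencies:
    - unit_tests               # copies the artefacts into this job's work area
  script:
    - sonar-scanner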

Publishing unit test results from TFS2013 Build to SonarQube

I have created a TFS2013 Build Definition using the template TfvcTemplate.12.xaml
I have specified a test run using VSTestRunner and enabled code coverage.
I am integrating this build with sonar analysis by specifying pre-build and post-test execution script.
Prebuild script arguments: begin /name:PrjName /key:PrjKey /version:1.0 /d:sonar.cs.vstest.reportsPaths="tst*.trx"
I have the "Unit Test Coverage" widget on my sonar dashboard.
It shows Unit Test Coverage %
However, it does not show the unit tests (i.e. how many tests were run, how many failed, etc.).
I looked in the build output. There is a "tst" folder, however it is empty.
I cannot find the trx files.
I believe that either the trx files are not properly generated or I am not setting "sonar.cs.vstest.reportsPaths" correctly.
Please help!
Relative paths are not well supported: Specify an absolute path wildcard to your *.trx reports. See https://jira.sonarsource.com/browse/SONARMSBRU-100 for details on the bug.
Note that you probably can use the TFS 2013 environment variables to construct this absolute path wildcard: https://msdn.microsoft.com/en-us/library/hh850448.aspx#env_vars
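For example, as a sketch only (TF_BUILD_BUILDDIRECTORY is taken from the TFS 2013 environment-variable list linked above; the tst subfolder is assumed from your setup):

/d:sonar.cs.vstest.reportsPaths="%TF_BUILD_BUILDDIRECTORY%\tst\*.trx"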

Which HWUT files to put under version control

I am using HWUT for unit testing and want to put my tests under version control. Adding the test code and the GOOD folder is obvious. But what about other files, e.g. the ADM folder?
NEEDED
GOOD/*
hwut-info.dat: If you specify it.
Makefile: If you specify it.
Your test scripts and source files that implement the test.
ADM/cache.fly: Optional; only check in if queries on past tests are to be possible without re-running the tests.
NOT TO BE CHECKED-IN
OUT/*
Any result produced by 'make'
Any temporary log files
Note: SCMs usually have a 'prop:ignore' property or an 'ignore' file, which you can adapt according to the information above - for example:
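A minimal .gitignore sketch along those lines (assuming Git; the log pattern is a guess at what the temporary log files look like):

# HWUT outputs not to be checked in
OUT/
*.log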