Golang test coverage when test files are outside the package

I am trying to get an accurate coverage figure for the code in my processor package using test files that live in the test directory. I've tried numerous combinations of -cover and -coverpkg and cannot get it to work correctly. Can this be done while keeping the test files in a separate package/folder?
.
├── internal
│   └── processor
└── test

I tracked down an answer and this is how I was able to accomplish my goal:
go test -cover -coverpkg "./internal/processor" "./test"
Per the cmd/go documentation, under Testing Flags, the flag usage is:
-coverpkg pattern1,pattern2,pattern3
Apply coverage analysis in each test to packages matching the patterns.
The default is for each test to analyze only the package being tested.
See 'go help packages' for a description of package patterns. Sets -cover.
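
For illustration, here is a minimal sketch of what a test in the separate test package might look like. The module path example.com/project and the Sum function are hypothetical stand-ins for your own code:

// test/processor_test.go
package test

import (
    "testing"

    "example.com/project/internal/processor"
)

func TestSum(t *testing.T) {
    // Sum is a hypothetical function in the processor package.
    if got := processor.Sum(2, 3); got != 5 {
        t.Errorf("Sum(2, 3) = %d, want 5", got)
    }
}

Since ./test lives in the same module, it is allowed to import internal/processor, and -coverpkg attributes the coverage back to that package.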


How to run all Golang benchmark tests including subfolders

I have a Go application with a number of unit and benchmark tests, both in the root and in a subfolder called "message".
I execute the following command to run all unit tests from the root, including the ones in message and any other subfolder:
go test ./...
I want to achieve the same for the benchmark tests, i.e. run them all. The following works for the ones in the root directory:
go test -bench .
The benchmark tests in the /message folder are ignored, which is expected. So I run the following from the root:
go test -bench ./...
That isn't recognised at all; Go seems to just execute the unit tests located in the root dir. I even tried specifying the message folder in the command, as follows:
go test -bench ./message
...but it also failed. Currently, if I want to run the benchmark tests in the message folder, I have to cd into that folder and execute
go test -bench .
as above.
So what's the correct way then? How can I tell Go to find the benchmark tests both in the root and the subfolders? How does the regexp arg work in the case of the -bench flag? Apparently it's different from the regexp for the unit test runner.
You should use ./... to benchmark all packages from the current working directory and its subdirectories. If you wish to get more verbose output you can use the -v flag, and it's also good to list memory allocations by using -benchmem. Passing -run=xxx skips the ordinary unit tests, since no test function name matches that pattern:
go test -v ./... -bench=. -run=xxx -benchmem
The -bench flag takes a regex, so to run all benchmarks (-bench .) in all packages: go test -bench=. ./...
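
To make the name matching concrete, here is a minimal, self-contained sketch of a benchmark file; the message package and encode helper are hypothetical:

package message

import (
    "strings"
    "testing"
)

// encode is a stand-in for real work so the sketch is self-contained.
func encode(s string) string { return strings.ToUpper(s) }

// go test discovers functions named Benchmark* and runs those whose
// names match the -bench regex, so -bench=. matches all of them.
func BenchmarkEncode(b *testing.B) {
    for i := 0; i < b.N; i++ {
        encode("hello")
    }
}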

Jest: how to get a code coverage report for components without tests within a specific folder (or the whole project, if that is feasible)?

I am writing unit tests for my React JS components in a specific folder using the Jest testing framework. All my components are within, let's say, the "src/components" folder. I can get the coverage of each test by adding the "--coverage" flag when I run the command, as follows.
jest --watchAll --coverage
It generates the report in the coverage folder, where I can view the code coverage of each test.
That displays the code coverage of each component that is included in the tests. But how can I see the list of components that are not yet covered by any test (ideally limited to a specific folder)? Is it possible to get that as a percentage too? How can I do that?
Add collectCoverageFrom paths for your extra folders, using glob patterns. For instance, I have:
collectCoverage: true,
collectCoverageFrom: [
  '<rootDir>/**/*.ts',
  '!<rootDir>/**/*.interface.ts',
  '!<rootDir>/**/*.mock.ts',
  '!<rootDir>/**/*.module.ts',
  '!<rootDir>/**/__mock__/*',
  '!<rootDir>/src/main.ts'
],
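With collectCoverageFrom set, Jest collects coverage for every file matching the globs even if no test touches it, so components without tests appear in the report at 0% and are reflected in the overall percentage.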

Testing generated Go code without co-locating tests

I have some auto-generated Go code for protobuf messages, and I'm looking to add some additional tests without locating them under the same directory path. This is to allow easy removal of the existing generated code, to be sure that if a file is dropped from generation it is not accidentally left in the codebase.
The current layout of these files is controlled by prototool, so I have something like the following:
/pkg/<other1>
/pkg/<other2>
/pkg/<name-generated>/v1/component_api.pb.go
/pkg/<name-generated>/v1/component_api.pb.gw.go
/pkg/<name-generated>/v1/component_api.pb.validate.go
The *.validate.go file comes from envoyproxy/protoc-gen-validate, and *.pb.go & *.pb.gw.go come from the protobuf and grpc libraries. other1 and other2 are two helper libraries that we include alongside the generated code to make things easier for client-side apps. The server side is in a separate repo and imports these as needed.
Because it's useful to be able to delete /pkg/<name> before re-running prototool, I've placed some tests of component_api (mostly to exercise the automatically generated validate rules) under the path:
/internal/pkg/<name>/v1/component_api_test.go
While this works for go test -v ./..., it appears not to work too well when generating coverage with -coverpkg.
go test -coverpkg=./... -coverprofile=coverage/go/coverage.out -v ./...
go build <pkgname>/internal/pkg/<name>/v1: no non-test Go files in ....
<output from the tests in /internal/pkg/<name>/v1/component_api_test.go>
....
....
coverage: 10.5% of statements in ./...
ok <pkgname>/internal/pkg/<name>/v1 0.014s coverage: 10.5% of statements in ./...
FAIL <pkgname>/pkg/other1 [build failed]
FAIL <pkgname>/pkg/other2 [build failed]
? <pkgname>/pkg/<name>/v1 [no test files]
FAIL
Coverage tests failed
Generated coverage/go/html/main.html
The reason for using -coverpkg is that without it, nothing registers that any of the code under <pkgname>/pkg/<name>/v1 is covered. We've also previously seen issues where the reported figure did not reflect the real level of coverage, which were solved by using -coverpkg:
go test -cover -coverprofile=coverage/go/coverage.out ./...
ok <pkgname>/internal/pkg/<name>/v1 0.007s coverage: [no statements]
ok <pkgname>/pkg/other1 0.005s coverage: 100.0% of statements
ok <pkgname>/pkg/other2 0.177s coverage: 100.0% of statements
? <pkgname>/pkg/<name>/v1 [no test files]
Looking at the resulting coverage/go/coverage.out includes no mention of anything under <pkgname>/pkg/<name>/v1 being exercised.
I'm not attached to the current layout, beyond the constraint that <pkgname>/pkg/<name>/v1 is managed automatically by prototool and its rules for naming the generated files. I would like the other modules we have to remain exported for use as helper libraries, and I would like to be able to add tests for <pkgname>/pkg/<name>/v1 without locating them in the same directory, allowing easy delete + recreate of the generated files while still getting sensible coverage reports.
I've tried fiddling with the packages passed to -coverpkg and replacing ./... on the command-line and haven't been able to come up with something that works. Perhaps I'm just not familiar with the right invocation?
Other than that is there a different layout that will take care of this for me?
To handle this scenario, simply create a doc.go file in the same directory as the dislocated tests, containing just the package clause and a comment. This allows the standard arguments to work, and Go appears to be perfectly happy with an otherwise empty file.
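A minimal sketch of such a doc.go, assuming the test directory's package is named v1 to match the tests:

// Package v1 contains tests for the generated component_api code.
// This file exists only so the directory has a non-test Go file,
// which keeps go build and -coverpkg=./... happy.
package v1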
Once in place the following will work as expected.
go test -coverpkg=./... -coverprofile=coverage/go/coverage.out -v ./...
Idea based on suggestion in https://stackoverflow.com/a/47025370/1597808

How Do I Set Up SonarQube cfamily.gcov Correctly?

I cannot get coverage reporting to work within SonarQube. I have a C++ project for which I am using the build-wrapper-linux-x86-64 along with the sonar-scanner. The basic static analysis for the source code seems to work but there is nothing about test code coverage reported within SonarQube.
As part of the same workflow I am using lcov and genhtml to make a unit test coverage report, so I am confident that most of the code coverage steps are being correctly executed. When I manually view the .gcov files I can see run counts in the first column, so there is data there.
I have my code organised into modules. The sonar-project.properties file includes the following:
# List of the module identifiers
sonar.modules=Module1,Module2
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
# This property is optional if sonar.modules is set.
sonar.sources=./Sources,./Tests
HeliosEmulator.sonar.sources=./Application,./Sources,./Tests
sonar.cfamily.build-wrapper-output=build_output
# Existing reports
sonar.cfamily.build-wrapper-output=build_output
#sonar.cfamily.cppunit.reportsPath=junit
sonar.cfamily.gcov.reportsPath=.
#sonar.cxx.cppcheck.reportPath=cppcheck-result-1.xml
#sonar.cxx.xunit.reportPath=cpputest_*.xml
sonar.junit.reportPaths=junit
I would also like to get the unit test results displayed under the Sonar tools. As I am using the CppUTest framework, I do not have xunit or junit test output at present, though. This can be dealt with as a separate issue, but as I have been unable to find much documentation on using the cfamily scanner online, I do not know whether the tests not being listed is relevant.
I had forgotten to set up my CI system correctly. The .gcov files did not exist for the job that was running the sonar-scanner; they only existed in the testing job that generated the coverage report. With no files in the scanner job, it cannot produce a coverage report.
When I configured the GitLab CI system I am using to keep the .gcov files as artefacts, the coverage reporting suddenly started working.
The .gcov files were generated by a test job and need to be transferred to the sonar-scanner job via the artefact store, because GitLab CI does not share a work area between dependent jobs and you have to explicitly say which files must be copied across.
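For illustration, a minimal .gitlab-ci.yml sketch of this arrangement; the job names, stages, and gcov invocation are hypothetical and will differ per project:

test:
  stage: test
  script:
    - make check             # run the unit tests with coverage enabled
    - gcov Sources/*.cpp     # produce the .gcov files
  artifacts:
    paths:
      - "*.gcov"             # keep the coverage data for later jobs

sonar:
  stage: analyse
  needs: [test]              # fetches the test job's artifacts
  script:
    - sonar-scanner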

automake: automatically run unit tests

I am maintaining an autoconf package and wanted to integrate automated testing. I use the Boost Unit Test Framework for my unit tests and was able to successfully integrate it into the package.
That is, the tests can be compiled via make check, but they are not run (although I have read that make check both compiles and runs the tests). As a result, I have to run them manually after building, which is cumbersome.
Makefile.am in the test folder looks like this:
check_PROGRAMS = prog_test
prog_test_SOURCES = test_main.cpp ../src/class1.cpp class1_test.cpp class2.cpp ../src/class2_test.cpp ../src/class3.cpp ../src/class4.cpp
prog_test_LDADD = $(BOOST_FILESYSTEM_LIB) $(BOOST_SYSTEM_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB)
Makefile.am in the root folder:
SUBDIRS = src test
dist_doc_DATA = README
ACLOCAL_AMFLAGS = ${ACLOCAL_FLAGS} -I m4
Running test/prog_test yields the output:
Running 4 test cases...
*** No errors detected
(I don't think you need the contents of my test cases in order to answer my question, so I omitted them for now)
So how can I make automake run my tests every time I run make check?
At least one way of doing this involves setting the TESTS variable. Here's what the automake documentation says about it:
If the special variable TESTS is defined, its value is taken to be a list of programs or scripts to run in order to do the testing.
So adding the line
TESTS = $(check_PROGRAMS)
should instruct it to run the tests on make check.
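Putting it together, the Makefile.am in the test folder (a sketch based on the file shown above) becomes:

check_PROGRAMS = prog_test
prog_test_SOURCES = test_main.cpp ../src/class1.cpp class1_test.cpp class2.cpp ../src/class2_test.cpp ../src/class3.cpp ../src/class4.cpp
prog_test_LDADD = $(BOOST_FILESYSTEM_LIB) $(BOOST_SYSTEM_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB)
TESTS = $(check_PROGRAMS)

With this in place, make check first builds prog_test and then runs it, failing if the test program exits with a non-zero status.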