How to get coverage for multiple files in GOLANG - unit-testing

I have a Go package that is implemented across several files (all files are in the same directory).
file1: mypackage.go

package mypackage

func f1() {}

file2: mypackage_addition.go

package mypackage

func f2() {}

file3: mypackage_test.go

package mypackage

import "testing"

func TestF1(t *testing.T) {
    f1()
}

file4: mypackageAddition_test.go

package mypackage

import "testing"

func TestF2(t *testing.T) {
    f2()
}
I do this in order to get coverage:
mypackage> $ tree
.
├── mypackage.go
├── mypackageAddition_test.go
├── mypackageAdditions.go
└── mypackage_test.go
0 directories, 4 files
mypackage> $ go test -v -coverprofile cover.out ./...
=== RUN TestF2
--- PASS: TestF2 (0.00s)
=== RUN TestF1
--- PASS: TestF1 (0.00s)
PASS
coverage: 0.0% of statements
ok github.com/MyDevelopment/mypackage 0.701s coverage: 0.0% of statements
mypackage> $ go tool cover -html=cover.out -o cover.html
mypackage> $ open cover.html
When I open the HTML, I only get coverage for f1().
f2 is called (I verified this in the debugger), and its run is represented in the text output, but not in the HTML file.
Any help is appreciated.

Just reiterating what is in my comment:
So after testing this I thought the same thing, but it looks like when I open the dropdown and switch to mypackageAdditions.go, f2() is covered. It is just in a different file. Just change the file in the dropdown on the HTML page.
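If you want to double-check without the HTML at all, the same cover.out from above can also be summarized per function in the terminal with go tool cover's -func mode; it lists the functions from every file in the package, so f2 should show up there too:

mypackage> $ go tool cover -func=cover.out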

Related

How to run database setup only once from multiple Go packages?

I'm trying to create some objects in my database so that my tests have some data to work with. I've put my setup logic into a package testsetup. However, I've discovered that go test builds and runs each package's tests as a separate test binary, so even though I'm using sync.Once in my testsetup package, Setup still runs multiple times - once per package - because sync.Once only guards within a single process. I really want to keep running my tests in parallel because it's a lot faster, so I'm not currently considering turning off parallelization. Is there a clean way I can do this?
I'm even starting to consider dirty hacks at this point, like using a shell script to implement os-level synchronization.
Here's my package structure:
testsetup
    testsetup.go
package1
    package1.go
    package1_test.go
package2
    package2.go
    package2_test.go
And here's a simplified version of my testsetup function:
var onceSetup sync.Once
var data model.MockData

func Setup() model.MockData {
    onceSetup.Do(createData)
    return data
}

func createData() {
    // Do some SQL calls to create the objects. We only want to do this once.
    data = model.MockData{
        Object1: ...,
        Object2: ...,
    }
}
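Each package's tests call Setup along these lines (an illustrative sketch only - the import path and test name are made up, since the real call sites aren't shown above):

package package1

import (
    "testing"

    "myproject/testsetup" // hypothetical import path
)

func TestSomething(t *testing.T) {
    data := testsetup.Setup() // guarded by sync.Once, but only within this package's test binary
    _ = data
    // ...
}

Because go test compiles one test binary per package, each binary gets its own copy of onceSetup, which is why createData ends up running more than once.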
It can be done but it may not be worth it, you'll have to decide that for yourself.
You'll need a package that implements a "test registry" and a "test runner", and another package that is the "entrypoint" that ties it all together and starts the runner.
The resulting structure could look something like this:
../module
├── app
│   ├── pkg1
│   │   ├── foo.go
│   │   ├── ...
│   │   └── tests
│   │       ├── test_foo.go
│   │       ├── ...
│   │       └── pkg1_test.go
│   └── pkg2
│       ├── ...
│       ├── bar.go
│       └── tests
│           ├── ...
│           ├── test_bar.go
│           └── pkg2_test.go
├── go.mod
├── internal
│   └── testutil
│       ├── registry.go   # the test registry
│       └── runner.go     # the test runner
└── tests
    └── start_test.go     # the test entrypoint
First, let's consider what the entrypoint will look like once this is done. It may be that you don't like what you see; in that case, you should probably ignore the rest of the answer.
File module/tests/start_test.go:
package tests

import (
    "testing"

    // Use the blank identifier for "side-effect-only" imports
    _ "module/app/pkg1/tests"
    _ "module/app/pkg2/tests"
    // ...

    "module/internal/testutil"
)

func Test(t *testing.T) {
    testutil.TestAll(t)
}
Next, the registry in module/internal/testutil/registry.go:
package testutil

import (
    "path/filepath"
    "runtime"
    "testing"
)

// The map key is the directory of a package, the outer slice holds
// the files in that directory, and the inner slice holds the tests
// registered from a single file.
var tests = make(map[string][][]func(*testing.T))

func Register(ft ...func(*testing.T)) int {
    // Use the directory of the caller's file to map the tests.
    // Why this can be useful will be shown later.
    _, f, _, _ := runtime.Caller(1)
    dir := filepath.Dir(f)
    tests[dir] = append(tests[dir], ft)

    // The return value is not necessary, but a function with a return
    // can be used in a top-level variable declaration, which avoids
    // unnecessary init() functions.
    return 0
}
The runner in module/internal/testutil/runner.go:
package testutil

import (
    "testing"
)

func TestAll(t *testing.T) {
    // TODO setup ...
    defer func() {
        // TODO teardown ...
    }()

    // run
    for _, dir := range tests {
        for _, file := range dir {
            for _, test := range file {
                test(t)
            }
        }
    }
}
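To tie this back to the question: the TODO setup and teardown slots are where something like the question's testsetup.Setup() would go, and because there is only one entrypoint test binary, that setup really does run once for the whole module. A sketch of the filled-in slots (the testsetup import path and the Teardown helper are assumptions, not something from the question):

func TestAll(t *testing.T) {
    data := testsetup.Setup() // runs exactly once, in the single entrypoint process
    defer func() {
        testsetup.Teardown(data) // hypothetical cleanup counterpart to Setup
    }()

    // run loop as above ...
}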
Now the individual packages, e.g. module/app/pkg1/tests/test_foo.go:
package tests

import (
    "testing"

    "module/internal/testutil"
)

var _ = testutil.Register(
    TestFoo1,
    TestFoo2,
)

func TestFoo1(t *testing.T) {
    // ...
}

func TestFoo2(t *testing.T) {
    // ...
}
That's it, you can now go to the module/tests "entrypoint" and run:
go test
ADDENDUM #1
If you want to retain the ability to test the individual packages separately, that can be integrated as well.
First, add a new function to the runner in module/internal/testutil/runner.go:
package testutil

import (
    // ...
    "path/filepath"
    "runtime"
)

// ...

func TestPkg(t *testing.T) {
    // Now the directory of the caller's file comes in handy.
    // We can use it to make sure that no tests other than the
    // caller's will get executed.
    _, f, _, _ := runtime.Caller(1)
    dir := filepath.Dir(f)

    // TODO setup ...
    defer func() {
        // TODO teardown ...
    }()

    // run
    for _, file := range tests[dir] {
        for _, test := range file {
            test(t)
        }
    }
}
And in the individual test package add a single test file, e.g. module/app/pkg1/tests/pkg1_test.go:
package tests

import (
    "testing"

    "module/internal/testutil"
)

func Test(t *testing.T) {
    testutil.TestPkg(t)
}
That's it, now you can cd into module/app/pkg1/tests and run:
go test
ADDENDUM #2
Now, with the individual packages having their own _test.go file, you are back to square one if you want to use go test module/... to execute all the tests in the module, since that would not only run the entrypoint but also cause the individual test packages to be executed individually.
You can, however, work around that problem with a simple environment variable. Just a small adjustment to the testutil.TestPkg function:
package testutil

import (
    // ...
    "os"
)

// ...

func TestPkg(t *testing.T) {
    if os.Getenv("skippkg") == "yes" {
        return
    }
    // ...
}
And now...
# ... the following will work as you'd expect
skippkg=yes go test module/...
go test module/tests
go test module/app/pkg1/tests
Is there some sort of blocking mechanism in your testsetup? I would think that each package would still run its tests in parallel and run what it needs from testsetup in parallel. Otherwise, you could structure it like this:
testsetup
    testsetup.go
    packages_test.go
package1
    package1.go
package2
    package2.go
And then testsetup/packages_test.go is where you run your tests, importing the code from package1 and package2.
It could look something like this:
package testsetup

import (
    "testing"

    "github.com/stretchr/testify/require"

    "project/root/model" // adjust to wherever model.MockData lives
    p1 "project/root/package1"
    p2 "project/root/package2"
)

func TestPackages(t *testing.T) {
    setup := Setup()
    t.Parallel()
    t.Run("Package1Test", func(t *testing.T) { package1Test(t, setup) })
    t.Run("Package2Test", func(t *testing.T) { package2Test(t, setup) })
}

func package1Test(t *testing.T, d model.MockData) {
    err := p1.RunYourFunc(d.Object1)
    require.NoError(t, err)
}

func package2Test(t *testing.T, d model.MockData) {
    err := p2.OtherFunc(d.Object2)
    require.NoError(t, err)
}

Please explain how we're supposed to test Julia libraries and why one of two breaks

In my Advent of Code repository I've had a utility library since last year and have been using things from it this year as well.
This year I wanted to add a second library for loading the input files more quickly. For some reason, unit tests and using work for the old library but not for the new one.
I tried to unify the two folders as much as possible; the Project.toml files, for instance, are now identical.
The two directories look like this (ProblemParser failing and Utils working):
ProblemParser ⛔
├── Manifest.toml
├── Project.toml
├── src
│   └── ProblemParser.jl
└── test
    ├── Manifest.toml
    ├── Project.toml
    └── runtests.jl
Utils ✅
├── Manifest.toml
├── Project.toml
├── src
│   └── Utils.jl
└── test
    ├── Manifest.toml
    ├── Project.toml
    └── runtests.jl
Adding them to the Project (Manifest) works fine (other stuff left out):
(AoC 2021) pkg> status
Status `~/src/me/AoC/21/Project.toml`
[16064a1e] ProblemParser v0.1.0 `../ProblemParser`
[c4255648] Utils v0.1.0 `../Utils`
However trying to use ProblemParser doesn't go so well.
julia> using Utils
julia> # that worked
julia> using ProblemParser
ERROR: KeyError: key ProblemParser [16064a1e-6b5f-4a50-97c7-fe66cda9553b] not found
Stacktrace:
 [1] getindex
   @ ./dict.jl:481 [inlined]
 [2] root_module
   @ ./loading.jl:1056 [inlined]
 [3] require(uuidkey::Base.PkgId)
   @ Base ./loading.jl:1022
 [4] require(into::Module, mod::Symbol)
   @ Base ./loading.jl:997
The same works/fails split happens when trying to run the tests.
(AoC 2021) pkg> activate ../Utils/
Activating project at `~/src/me/AoC/Utils`
(Utils) pkg> test
Testing Utils
Status `/tmp/jl_AGawpC/Project.toml`
[c4255648] Utils v0.1.0 `~/src/me/AoC/Utils`
[8dfed614] Test `@stdlib/Test`
Status `/tmp/jl_AGawpC/Manifest.toml`
[79e6a3ab] Adapt v3.3.1
----- 8< snipped 8< -----
[4536629a] OpenBLAS_jll `@stdlib/OpenBLAS_jll`
[8e850b90] libblastrampoline_jll `@stdlib/libblastrampoline_jll`
Testing Running tests...
Test Summary: | Pass Total
#something_nothing | 15 15
Testing Utils tests passed
(Utils) pkg> activate ../ProblemParser/
Activating project at `~/src/me/AoC/ProblemParser`
(ProblemParser) pkg> test
Testing ProblemParser
Status `/tmp/jl_6v5Y3D/Project.toml`
[16064a1e] ProblemParser v0.1.0 `~/src/me/AoC/ProblemParser`
[8dfed614] Test `@stdlib/Test`
Status `/tmp/jl_6v5Y3D/Manifest.toml`
[16064a1e] ProblemParser v0.1.0 `~/src/me/AoC/ProblemParser`
[2a0f44e3] Base64 `@stdlib/Base64`
----- 8< snipped 8< -----
[9e88b42a] Serialization `@stdlib/Serialization`
[8dfed614] Test `@stdlib/Test`
Testing Running tests...
ERROR: LoadError: ArgumentError: Package ProjectParser not found in current path:
- Run `import Pkg; Pkg.add("ProjectParser")` to install the ProjectParser package.
Stacktrace:
 [1] require(into::Module, mod::Symbol)
   @ Base ./loading.jl:967
 [2] include(fname::String)
   @ Base.MainInclude ./client.jl:451
 [3] top-level scope
   @ none:6
in expression starting at /home/tsbr/src/me/AoC/ProblemParser/test/runtests.jl:1
ERROR: Package ProblemParser errored during testing
What is the difference between the two? What makes one work and the other not?
I just don't see it.
Ah, you have the module name defined wrong in src/ProblemParser.jl - the first line is module ProjectParser instead of module ProblemParser. (You can see it in the error message, which complains about ProjectParser rather than ProblemParser.)

Unit tests using Automake

I am working on a project with other people in the team using GNU Autotools. In the project we are writing a unit test for each non-trivial C++ class, and I found out that Automake has support for unit testing. For that I am using this structure:
./
+ tests/
    + Makefile.am
    + classA_test.cc
    ....
    + classB_test.cc
+ src/
+ lib/
+ Makefile.am
The problem is that, since my main Makefile.am is using the subdir-objects option (note that I am not using recursive make for the source files), I cannot export my variables (such as AM_CPPFLAGS) to the other Makefile. So far I made it work using:
$ make check
but I keep getting problems with the paths and the options when I do
$ make distcheck
So my question is: what is the standard way to deal with unit tests?
EDIT:
I made it work as long as I remove subdir-objects from the tests/Makefile.am. It now throws some warnings but it compiles. Still, this does not seem like an appropriate way to deal with unit tests.
After some research I came up with an appropriate way to deal with unit tests and Automake:
Following the previous scheme:
./
+ tests/
    + Makefile.am
    + classA_test.cc
    ....
    + classB_test.cc
+ src/
+ lib/
+ Makefile.am
The Makefile.am in the root will be the main one; it calls the Makefile in the tests directory:
$ cat Makefile.am
SUBDIRS = . tests   # (Super important) note the "." before tests:
                    # it means the current directory is built first
....
$ cat tests/Makefile.am
AM_CXXFLAGS = ...
AM_LDFLAGS = -L@top_srcdir@/lib   # if needed
LDADD = -llibraryfortests         # if needed
TESTS = test1 .. testN
check_PROGRAMS = $(TESTS)   # the test programs must also be listed here so "make check" builds them
test1_SOURCES = test1.cc ../src/somewhere/classtotest.cc
testN_SOURCES = ...
$ cat configure.ac
AM_INIT_AUTOMAKE([subdir-objects])
AC_CONFIG_FILES([Makefile])
AC_CONFIG_FILES([tests/Makefile])
...
Now, if you want to run the tests:
$ sh ../pathto/configure
$ make check
make dist[check] should work as well:
$ make distcheck
...
make[3]: Entering directory `/home/vicente/test/tests'
PASS: settings
============================================================================
Testsuite summary for Pepinos 00.13.15
============================================================================
# TOTAL: 1
# PASS: 1
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
make[3]: Leaving directory `/home/vicente/test/tests'
...
So, to answer the other question:
Q. I cannot export my variables (such as AM_CPPFLAGS) to the other Makefile.
A. True, but I can always declare a variable in configure.ac and AC_SUBST it to make it visible to the other Makefile.am files.
Sources: https://stackoverflow.com/a/29255889/2420872

cabal: how to stop the build on test failure?

I created tests in HUnit (Tests.hs) and connected them to main: main = runTestTT tests. When I do runhaskell Tests I see
### Failure in: 0
T(1)
expected: 145
but got: 45
Cases: 10 Tried: 10 Errors: 0 Failures: 1
Counts {cases = 10, tried = 10, errors = 0, failures = 1}
which is expected. In the cabal file I did
test-suite xxx
  type:          exitcode-stdio-1.0
  main-is:       Tests.hs
  build-depends: base ==4.5.*, HUnit ==1.2.5.2, containers ==0.5.5.1
and when I do cabal test, the same test logs are written to a file - so I'm sure the tests are executed and failing (as expected) - but in the console I see:
1 of 1 test suites (1 of 1 test cases) passed.
and the exit code is 0.
So my question is: why does cabal claim the tests passed, and how do I make it report the failures correctly?
I just had to figure this out myself; this is what I finally got to work.
module Main where

import Data.Monoid
import Test.Framework
import Test.Framework.Providers.HUnit
import Test.HUnit

firstTest :: Assertion -- This one passes
firstTest = do
    assertEqual "reward state root doesn't match" (1::Int) 1

secondTest :: IO () -- This one fails (note, Assertion is just "IO()", so you can use either)
secondTest = do
    assertEqual "empty db didn't match" (1::Int) 2

main :: IO ()
main =
    defaultMainWithOpts
        [ testCase "ShortcutNodeData Insert" firstTest
        , testCase "FullNodeData Insert" secondTest
        ] mempty
In my .cabal file
Test-Suite test-program
  type:           exitcode-stdio-1.0
  main-is:        Main.hs
  hs-source-dirs: test
  build-depends:  base
                , test-framework
                , test-framework-hunit
                , HUnit
                , containers
then run with cabal test
It is tricky, because cabal looks at the exit code (see the suite type above), while HUnit prints its own messages. So if your main doesn't exit with the correct code, you can see output like "test failed" followed by "test passed". The solution is to use test-framework's defaultMainWithOpts, which sets the exit code correctly.

Package visibility in Go Unit Tests

Given the following code file (named server.go) in Go:
package glimpse

func SplitHeader() string {
    return "hi there"
}
and the accompanying test file (server_test.go):
package glimpse

import (
    "testing"
)

func TestSplitHeader(t *testing.T) {
    answer := SplitHeader()
    if answer == "" {
        t.Error("No return value")
    }
}
Why is it that the following command:
go test server_test.go
returns
# command-line-arguments
./server_test.go:9: undefined: SplitHeader
I'm certainly missing something catastrophically obvious.
Use only
$ go test
from within the package directory to perform testing. If you name specific files as arguments to go test, then only those files will be considered for the build of the test binary. That explains the 'undefined' error.
As an alternative, use the import path as an argument to go test instead, for example
$ go test foo.com/glimpse
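A related usage note (not part of the original answer): if you really do want to pass files on the command line, listing every file of the package together should also build, since the test file can then see the function it calls:

$ go test server.go server_test.go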