When go test is run, it runs your files ending in _test.go by executing the functions that match the TestXxx naming format and take a (t *testing.T) parameter. I was wondering whether each function in a _test.go file is run concurrently, or whether each function is definitively run separately. Does it create a goroutine for each one? If it does create a goroutine for each one, can I monitor the goroutines in some way? Is it ever possible to do something like golibrary.GoRoutines() and get a handle for each one and monitor them somehow, or something like that?
Note: this question assumes you are using the testing framework that comes with Go (testing).
Yes, each test function is executed in its own goroutine.
However, tests do not run in parallel by default, as pointed out by @jacobsa: the testing harness waits for each test to finish before starting the next one. To enable parallel execution you have to call t.Parallel() in your test case and set GOMAXPROCS appropriately or supply -parallel N (which defaults to GOMAXPROCS).
The simplest solution for your case, when running tests in parallel, is a global slice of port numbers plus a global, atomically incremented index that associates each test with a port from the slice. That way you control the port numbers and each test gets its own port. Example:
import "sync/atomic"
var ports [...]uint64 = {10, 5, 55}
var portIndex uint32
func nextPort() uint32 {
return atomic.AddUint32(&portIndex, 1)
}
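A minimal sketch of how a test might then claim its port (the test name and body are made up for illustration; it assumes the snippet above lives in the same _test.go file, together with import "testing"):

func TestSomethingOnItsOwnPort(t *testing.T) {
    t.Parallel() // opt in to parallel execution
    port := nextPort()
    t.Logf("this test owns port %d", port)
    // start the server under test on this port and make assertions against it
}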
Yes, as other answers have already pointed out, tests inside one _test.go file run in parallel only if you add t.Parallel() and run with the -parallel flag.
BUT: by default, the go test command runs tests for different packages in parallel, and the default number of packages tested in parallel is the number of CPUs available (see go help build).
So to disable parallel execution you should run the tests like this:
go test -p 1 ./...
Yes, if you add t.Parallel() to your functions, like this:
func TestMyFunction(t *testing.T) {
    t.Parallel()
    // your test code here
}
There are two aspects of "parallel tests" in Go.
1. Testing of packages (folders) in parallel, controlled by the -p flag.
go help build:
-p n
    the number of programs, such as build commands or
    test binaries, that can be run in parallel.
    The default is GOMAXPROCS, normally the number of CPUs available.
2. Testing of test functions within a package, controlled by the -parallel flag; tests must opt in by calling t.Parallel().
go help testflag:
-parallel n
    Allow parallel execution of test functions that call t.Parallel.
    The value of this flag is the maximum number of tests to run
    simultaneously; by default, it is set to the value of GOMAXPROCS.
    Note that -parallel only applies within a single test binary.
    The 'go test' command may run tests for different packages
    in parallel as well, according to the setting of the -p flag
    (see 'go help build').
For #1, you can observe this directly by having tests in multiple packages and forcing them to be long-running with a sleep. Then run ps aux, grep for the test binaries, and you'll see separate processes running in parallel (a sketch of such a slow test follows the trace output below).
Or run with the -x flag for detailed output; that shows pretty clearly what's going on. The test binaries are launched once per package (i.e. given *_test.go files in several packages). For example, running go test -x ./... -v -count=1 |& ruby -pe 'print Time.now.strftime("[%Y-%m-%d %H:%M:%S] ")' |& tee temp.out with various combinations of -p=N and -parallel=M, you can see them running in parallel:
[2022-05-09 17:22:26] $WORK/b065/understandinit.test -test.paniconexit0 -test.timeout=10m0s -test.parallel=1 -test.v=true -test.count=1
[2022-05-09 17:22:26] $WORK/b001/main.test -test.paniconexit0 -test.timeout=10m0s -test.parallel=1 -test.v=true -test.count=1
[2022-05-09 17:22:26] $WORK/b062/mainsub.test -test.paniconexit0 -test.timeout=10m0s -test.parallel=1 -test.v=true -test.count=1
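If you want to reproduce the ps aux observation mentioned above, here is a minimal sketch of a deliberately slow test you can drop into each of several packages (the package name and sleep duration are arbitrary):

package mypkg

import (
    "testing"
    "time"
)

func TestSlowEnoughToObserve(t *testing.T) {
    // Sleep long enough to catch the per-package test binaries with ps.
    time.Sleep(10 * time.Second)
}

While the sleeps are running, ps aux | grep '\.test' should show one <pkg>.test process per package, up to the limit set by -p.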
You can run them concurrently by flagging the test with t.Parallel and then running the tests with the -parallel flag.
You can see other testing flags here
Summary of the problem
I've setup a repository with the checks I developed, designed to be used exclusively in an another repository.
The problem I've been facing is that when I run pre-commit run -a -v the checks don't go through every file, and from time to time the set of files even changes!
A little more details
I've run the identity check and it prints every file in the repo, meaning the files are read by pre-commit (per my understanding)
When I execute pre-commit run, the files in staging are correctly parsed
Why do I think that the checks don't run on every file?
I always print something to standard output for every check, and when I run it verbosely it only prints 5 to 7 lines, meaning it checked only those files.
If I try to edit a file (breaking a check) that's not on the short list, the check still passes
Some code
The structure of the folder containing the files to be checked
./
    articles/
        some_name/
            file.md
            file.png
            and_so_on.jpeg
    team/
        some_other_name/
            file.md
            propic.jpg
Summary of the .pre-commit-hooks.yaml
- id: team-check-name
  name: blabla
  description: blabla
  entry: team-checkname
  files: 'team/.*/'
  types: [markdown]
  language: python
- id: article-check-name
  name: blabla
  description: blabla
  entry: article-checkname
  files: 'articles/.*/'
  types: [markdown]
  language: python
Essentially, the checks that are supposed to run on articles/.*/ start with article- and they all have this property: files: 'articles/.*/'.
Similarly, every check that is supposed to run on team/.*/ starts with team- and they all have files: 'team/.*/'
Summary of the .pre-commit-config.yaml
repos:
  - repo: https://github.com/repourl
    rev: commit_hash
    hooks:
      - id: team-check-name
      - id: article-check-name
But as you can see, I don't override any settings, so it shouldn't interfere.
the hook tools you've written are incorrect -- they only process sys.argv[1] rather than positional arguments
your code uses a lot of global variables so it's not a straightforward refactor -- typically you'd either use argparse to collect nargs='*' or loop over sys.argv[1:] (if you don't have any options)
as to why this is the convention -- it's very wasteful to start a linter process over and over to lint a single file (oftentimes the executable's startup cost dwarfs the actual linting / formatting work)
disclaimer: I wrote pre-commit
I'm working on a TCL project, for which version control is based on Git. So, in order to produce good-quality code, I set up execution of the tests in a pre-commit hook.
However, even though the tests are executed (the trace is shown on the command line), when one of them fails, Git still performs the commit. So I launched the hook manually to check the error code, and I found that it is zero, which explains why Git does not stop:
$ .git/hooks/pre-commit
++++ FlattenResult-test PASSED
(...)
==== CheckF69F70 FAILED
==== Content of test case:
(...)
==== CheckF69F70 FAILED
$ echo $?
0
(Launching the test script with tclsh also results in $? being 0.)
So my question is about this last line: why is $? equal to 0 when one of the tcl tests fails? And how can I achieve a simple pre-commit hook that stops on failure?
I read and reread the tcltest documentation, but saw no setting or information about this exit code. And I would really like not to have to parse the tcl test output to check whether ERROR or FAILED is present...
Edit: versions
TCL version : 8.5
tcltest version: 2.3.4
This depends on how you run your test suite. Normally you run a file called tests/all.tcl which may look something like this:
package require Tcl 8.6
package require tcltest 2.5
namespace import tcltest::*
configure -testdir [file dirname [file normalize [info script]]] {*}$argv
runAllTests
That final runAllTests returns a boolean indicating success (0) or failure (1). You can use that to generate an exit code by changing the last line to:
exit [runAllTests]
I use this redefinition in some of my test scripts:
# Exit non-zero if any tests fail.
# tcltest's `cleanupTests` resets the numTests array, so capture it first.
proc cleanupTests {} {
    set failed [expr {$::tcltest::numTests(Failed) > 0}]
    uplevel 1 ::tcltest::cleanupTests
    if {$failed} then {exit 1}
}
After some research, I managed to make it work, even though several factors were against me:
I have to use an old TCL version (8.5) with tcltest version 2.3.4, in which runAllTests returns nothing;
I forgot to write cleanupTests at the end of the test scripts, as the documentation is not really clear about its usage. (It is not much clearer now; I just figured out that it is needed if you want your tests to be picked up by runAllTests, which is really not obvious.)
And here is my solution, mostly based on Hai's DevBits blog post:
all.tcl
package require tcltest
::tcltest::configure (...)
proc ::tcltest::cleanupTestsHook {} {
    variable numTests
    set ::exitCode [expr {$numTests(Total) == 0 || $numTests(Failed) > 0}]
}
::tcltest::runAllTests
exit $exitCode
Some thoughts about it:
I added $numTests(Total) == 0 as a failure condition: it means that no tests were found, which is clearly an erroneous condition;
This doesn't catch exceptions in the configuration of the tests, for instance a source command that points to a non-existing file, revealing some failure in the test scaffolding. This would be caught as an error in other test frameworks (ah, pytest, I miss you!)
We have a project managed in GitLab, with a CI pipeline for builds and tests (pytest, Google Test). Two or three of our test cases in Google Test fail, but GitLab considers the test stage successful. Is it because the success percentage is above 90% (an arbitrary value)? Is there a way to make the stage (and thus the complete pipeline) fail if we don't get 100% success?
Here is a screenshot of the pipeline summary:
Here is the yml script of the stage:
test_unit_test:
  stage: test
  needs: ["build", "build_unit_test"]
  image: $DOCKER_IMAGE
  rules:
    - if: '$CI_PIPELINE_SOURCE != "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script: |
    ZIPNAME=`cat _VERSION_.txt`
    ./scripts/gitlab-ci/stage-unittests.sh test_unit_test_report.xml $ZIPNAME
  artifacts:
    reports:
      junit: test_unit_test_report.xml
    expire_in: 1 week
Thank you for any help.
Regards.
GitLab CI/CD jobs don't care what the script sections are doing (so they don't look at, for example, test pass percentages). The only things they use to determine whether a job passed or failed are exit codes and the allow_failure keyword.
After each command in the before_script, script, and after_script sections is executed, the GitLab Runner checks its exit code. If it is non-zero, the command is considered a failure, and if the allow_failure keyword is not set to true for the job, the job fails.
So, for your job, even though the tests are failing, the script is somehow exiting with code 0, meaning the command itself finished successfully. The command in this case is:
ZIPNAME=$(cat _VERSION_.txt)
./scripts/gitlab-ci/stage-unittests.sh test_unit_test_report.xml $ZIPNAME
NOTE: I replaced your backticks '`' with the $(command) syntax explained here which does the same thing (execute this command) but has some advantages over '`command`' including nesting and easier use in markdown where '`' indicates code formatting.
So, since you are calling a script (./scripts/gitlab-ci/stage-unittests.sh) to run your tests, that script itself is finishing successfully, so the job finishes successfully. Take a look at that script to see if you can tell why it finishes successfully even though the tests fail.
I'd like to get the number of tests that were run with go test, as kind of a checksum to detect if all the tests are running. Since Go relies on filenames and method names to determine what's a test, it's easy to mistype something, which would mean the test would silently be skipped.
I think that the gotestsum tool is close to what you are looking for.
It is a wrapper around go test that prints formatted test output and a summary of the test run.
Default go test:
go test ./...
? github.com/marco-m/timeit/cmd/sleepit [no test files]
ok github.com/marco-m/timeit/cmd/timeit 0.601s
Default gotestsum:
gotestsum
∅ cmd/sleepit
✓ cmd/timeit (cached)
DONE 11 tests in 0.273s <== see here
Check out the documentation and the built-in help; it is well written.
In my experience, gotestsum (and the other tools by the same organization) is good. For me, it is also very important to be able to use the standard Go test package, without other "test frameworks". gotestsum allows me to do so.
On the other hand, to really satisfy your requirement (print the number of declared tests and verify that that number actually ran), you would need something like TAP, the Test Anything Protocol, which works for any programming language:
1..4 <== see here
ok 1 - Input file opened
not ok 2 - First line of the input valid
ok 3 - Read the rest of the file
not ok 4 - Summarized correctly # TODO Not written yet
TAP actually is very nice and simple. I remember there was a Go port, tap-go, but it is now marked as archived.
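If you prefer to stay with the plain go toolchain, a rough alternative is to count the events that go test -json emits yourself. A minimal sketch (the field names follow the test2json output; treat it as an illustration rather than a hardened tool, and note that the counts include subtests):

package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
)

// event mirrors a subset of the records printed by `go test -json`.
type event struct {
    Action string // "run", "pass", "fail", "skip", "output", ...
    Test   string // empty for package-level events
}

func main() {
    counts := map[string]int{}
    sc := bufio.NewScanner(os.Stdin)
    for sc.Scan() {
        var e event
        if err := json.Unmarshal(sc.Bytes(), &e); err != nil || e.Test == "" {
            continue // ignore malformed lines and package-level events
        }
        counts[e.Action]++
    }
    fmt.Printf("run=%d pass=%d fail=%d skip=%d\n",
        counts["run"], counts["pass"], counts["fail"], counts["skip"])
}

Pipe the test output into it, for example go test -json ./... | go run ./tools/counttests (the path is hypothetical); comparing the run count against the number of tests you expect gives you the checksum-like signal you describe.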
The go test command has support for the -c flag, described as follows:
-c Compile the test binary to pkg.test but do not run it.
(Where pkg is the last element of the package's import path.)
As far as I understand, generating a binary like this is the way to run it interactively using GDB. However, since the test binary is created by combining the source and test files temporarily in some /tmp/ directory, this is what happens when I run list in gdb:
Loading Go Runtime support.
(gdb) list
42 github.com/<username>/<project>/_test/_testmain.go: No such file or directory.
This means I cannot happily inspect the Go source code in GDB like I'm used to. I know it is possible to force the temporary directory to stay by passing the -work flag to the go test command, but then it is still a huge hassle since the binary is not created in that directory and such. I was wondering if anyone found a clean solution to this problem.
Go 1.5 has been released, and there is still no officially sanctioned Go debugger. I haven't had much success using GDB for effectively debugging Go programs or test binaries. However, I have had success using Delve, a non-official debugger that is still undergoing development: https://github.com/derekparker/delve
To run your test code in the debugger, simply install delve:
go get -u github.com/derekparker/delve/cmd/dlv
... and then start the tests in the debugger from within your workspace:
dlv test
From the debugger prompt, you can single-step, set breakpoints, etc.
Give it a whirl!
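For a concrete feel of the workflow, suppose the package you want to debug contains something like this (hypothetical file and function names):

// adder_test.go
package adder

import "testing"

func add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
    if got := add(2, 3); got != 5 {
        t.Fatalf("add(2, 3) = %d, want 5", got)
    }
}

Running dlv test in that directory should let you set a breakpoint with break TestAdd, continue to it, and step through or inspect locals from there, and the source listings resolve to the real files rather than a temporary _testmain.go.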
Unfortunately, this appears to be a known issue that's not going to be fixed. See this discussion:
https://groups.google.com/forum/#!topic/golang-nuts/nIA09gp3eNU
I've seen two solutions to this problem.
1) create a .gdbinit file with a set substitute-path command to
redirect gdb to the actual location of the source. This file could be
generated by the go tool but you'd risk overwriting someone's custom
.gdbinit file and would tie the go tool to gdb which seems like a bad
idea.
2) Replace the source file paths in the executable (which are pointing
to /tmp/...) with the location they reside on disk. This is
straightforward if the real path is shorter then the /tmp/... path.
This would likely require additional support from the compiler /
linker to make this solution more generic.
It spawned this issue on the Go issue tracker on Google Code, where the decision ended up being:
https://code.google.com/p/go/issues/detail?id=2881
This is annoying, but it is the least of many annoying possibilities.
As a rule, the go tool should not be scribbling in the source
directories, which might not even be writable, and it shouldn't be
leaving files elsewhere after it exits. There is next to nothing
interesting in _testmain.go. People testing with gdb can break on
testing.Main instead.
Russ
Status: Unfortunate
So, in short, it sucks, and while you can work around it and GDB a test executable, the development team is unlikely to make it as easy as it could be for you.
I'm still new to the golang game, but for what it's worth, basic debugging seems to work.
The list command you're trying to use works as long as you're already stopped at a breakpoint somewhere in your code. For example:
(gdb) b aws.go:54
Breakpoint 1 at 0x61841: file /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go, line 54.
(gdb) r
Starting program: /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.test
[snip: some warnings about BinaryCache]
Breakpoint 1, github.com/stellar/deliverator/aws.imageIsNewer (latest=0xc2081fe2d0, ami=0xc2081fe3c0, ~r2=false)
at /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go:54
54 layout := "2006-01-02T15:04:05.000Z"
(gdb) list
49 func imageIsNewer(latest *ec2.Image, ami *ec2.Image) bool {
50 if latest == nil {
51 return true
52 }
53
54 layout := "2006-01-02T15:04:05.000Z"
55
56 amiCreationTime, amiErr := time.Parse(layout, *ami.CreationDate)
57 if amiErr != nil {
58 panic(amiErr)
This is just after running the following in the aws subdir of my project:
go test -c
gdb aws.test
As an additional caveat, it does seem very selective about where breakpoints can be placed. It seems like it has to be on an expression, but that conclusion is only from experimentation.
If you're willing to use tools besides GDB, check out godebug. To use it, first install with:
go get github.com/mailgun/godebug
Next, insert a breakpoint somewhere by adding the following statement to your code:
_ = "breakpoint"
Now run your tests with the godebug test command.
godebug test
It supports many of the parameters from the go test command.
-test.bench string
    regular expression per path component to select benchmarks to run
-test.benchmem
    print memory allocations for benchmarks
-test.benchtime duration
    approximate run time for each benchmark (default 1s)
-test.blockprofile string
    write a goroutine blocking profile to the named file after execution
-test.blockprofilerate int
    if >= 0, calls runtime.SetBlockProfileRate() (default 1)
-test.count n
    run tests and benchmarks n times (default 1)
-test.coverprofile string
    write a coverage profile to the named file after execution
-test.cpu string
    comma-separated list of number of CPUs to use for each test
-test.cpuprofile string
    write a cpu profile to the named file during execution
-test.memprofile string
    write a memory profile to the named file after execution
-test.memprofilerate int
    if >=0, sets runtime.MemProfileRate
-test.outputdir string
    directory in which to write profiles
-test.parallel int
    maximum test parallelism (default 4)
-test.run string
    regular expression to select tests and examples to run
-test.short
    run smaller test suite to save time
-test.timeout duration
    if positive, sets an aggregate time limit for all tests
-test.trace string
    write an execution trace to the named file after execution
-test.v
    verbose: print additional output