Build a single test in Go

I have two tests in my project. I would like to build a single test, place the resulting binary in a container, run it, and then attach a debugger.
Is this possible?
package dataplatform

import "testing"

func TestA(t *testing.T) {
    // test A
}

func TestRunCommand(t *testing.T) {
    // test B
}

You may use -run <regexp> to limit (filter) the tests to run. For example, if you want to run only the TestA() test, you may do it like this:
go test -run TestA
Actually the above will run all tests whose names contain TestA, so to be explicit, it would be:
go test -run ^TestA$
To not run the tests but generate the test binary, you may use the -c option:
go test -c
This won't run the tests, but compile a binary which when executed will run the tests.
The problem is that you can't combine these options: running
go test -c -run TestA
will generate a binary which, when executed, will run all tests.
The truth is that the generated binary accepts the same parameters as go test, so you may pass -run TestA to the generated binary, but you must prefix the flags with test. (note the trailing dot):
Each of these flags is also recognized with an optional 'test.' prefix, as in -test.v. When invoking the generated test binary (the result of 'go test -c') directly, however, the prefix is mandatory.
So if the name of the generated test binary is my.test, run it like:
./my.test -test.run TestA
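Putting this together for the container-and-debugger workflow from the question - a sketch in which the output name, the package path, and the use of Delve are assumptions; -gcflags "all=-N -l" disables optimizations and inlining so a debugger can step through the code:
# compile only this package's tests into a binary, debugger-friendly
go test -c -gcflags "all=-N -l" -o dataplatform.test ./dataplatform
# inside the container: run only TestA
./dataplatform.test -test.run '^TestA$' -test.v
# or attach a debugger (Delve) directly to the test binary
dlv exec ./dataplatform.test -- -test.run '^TestA$'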
For more options and documentation, run go help test and go help testflag, or see the official documentation of the go command, in particular its "Testing flags" section.

Related

Golang: Compile all tests across a repo without executing them

Context
I have a repo in which multiple teams contribute integration tests.
All these tests are hidden behind //go:build integration constraints, so if I want go to see or run them, I need to pass the -build integration flag to the test command.
Purpose
What I'm trying to accomplish is to compile all tests across the entirety of this repo without actually executing them (which would take a long time), so I can catch build errors introduced by PRs that the default go test compilation and execution would not catch.
I see the -c flag:
-c
Compile the test binary to pkg.test but do not run it
(where pkg is the last element of the package's import path).
The file name can be changed with the -o flag.
However… one cannot use the -c flag with the -build flag:
$ go test -build=integration -c ./integrationtests/client/admin-api
go: unknown flag -build=integration cannot be used with -c
Also... one cannot use the -c flag across multiple packages:
$ go test -c ./...
cannot use -c flag with multiple packages
Any ideas?
You can use the go test -run flag and pass it a pattern you know will never match:
go test -run=XXX_SHOULD_NEVER_MATCH_XXX ./...
ok mypkg 0.731s [no tests to run]
This will catch any test compilation errors - and if there are none, no tests will be run.
If you need to pass any build tags that are typically passed during your go build process (e.g. go build -tags mytag), you can do the same during go test:
go test -tags mytag -run=XXX_SHOULD_NEVER_MATCH_XXX ./...
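Applied to the integration-tagged repo from the question (the tag name is taken from the question itself), that would be:
go test -tags integration -run=XXX_SHOULD_NEVER_MATCH_XXX ./...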
Full details on the -run flag from the inline (go help testflag) docs:
-run regexp
Run only those tests, examples, and fuzz tests matching the regular
expression. For tests, the regular expression is split by unbracketed
slash (/) characters into a sequence of regular expressions, and each
part of a test's identifier must match the corresponding element in
the sequence, if any. Note that possible parents of matches are
run too, so that -run=X/Y matches and runs and reports the result
of all tests matching X, even those without sub-tests matching Y,
because it must run them to look for those sub-tests.

How to pass command-line arguments in CTest at runtime?

I would like to pass parameters to our Catch2 tests via CTest, when running through Bamboo or Jenkins, so that they produce JUnit test results. I would like to do something like:
make test ARGS="-r junit -o test_results.xml"
That would forward these on to my test:
unittest -r junit -o test_results.xml
That way, when I run make test on its own, it just runs the tests normally, pretty-printing results to the console.
I know args can be added in the add_test() command but I'm looking for something more dynamic.
I'm hoping there is a way to do this in modern CMake.
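As far as I know, ctest does not forward extra arguments to the test commands at runtime; the ARGS variable in make test ARGS="..." is passed to ctest itself (e.g. ARGS="-V"), not to the test executables. One hedged workaround is to wrap the test in a shell invocation that expands an environment variable when the test runs - a sketch, where TEST_ARGS is a made-up variable name and unittest is assumed to be the CMake target for the test binary:
add_test(NAME unittest
         COMMAND sh -c "$<TARGET_FILE:unittest> $TEST_ARGS")
CMake only expands ${...} references, so the literal $TEST_ARGS survives into the shell command and is expanded by sh at runtime:
TEST_ARGS="-r junit -o test_results.xml" make test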

Why does `go test -run NotExist` pass?

If I run the following
go test -run NotExist
The response is PASS. Seeing as my test file does not contain a test called TestNotExist, I would expect the command above to return FAIL.
Without the -run option, go test runs all tests. You use the -run option to not run all tests: to filter out, to exclude tests (you do this by requiring the names of the tests that are to be kept to match a regexp pattern, but that detail is irrelevant to the point under discussion):
Command go, Test packages:
By default, go test needs no arguments. It compiles and tests the package with source in the current directory, including tests, and runs the tests.
Description of testing flags:
-run regexp
Run only those tests and examples matching the regular expression.
It is a perfectly "normal" outcome that the filtering filters out all tests, so that no tests remain in the set of tests that still need to be executed.
When no tests FAIL, it is considered as the test run PASSes. If no tests match, no tests will run and no tests will FAIL, and thus the test run will PASS.
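For example, run against a package in which no test name matches NotExist, the command reports success along with a note that nothing ran (the package name and timing here are illustrative):
go test -run NotExist
ok example.com/mypkg 0.004s [no tests to run]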

CMake and CTest's default "test" command skips a specially named test

I'm using CTest with CMake to run some tests. I use the enable_testing() command which provides me with a default command for make test. All of the tests in my subdirectory are accounted for (by doing an add_test command) and make test works great, except one problem.
There is a certain test, which I've named skip_test, that I do NOT want run when I do make test. I would like to add a custom target so I can run make skip_test and it will run that test.
I can do this by doing add_custom_target(skip_test ...) and providing CTest with the -R flag, telling it to look for tests whose names contain "skip_test". This also seems to work. My problem now is: how can I get the make test command to ignore skip_test?
If I try commenting out enable_testing and adding my own add_custom_target(test ....), I get "No tests found!!!" for either make test or make skip_test. I also tried making a custom CTest file and adding set(CTEST_CUSTOM_TESTS_IGNORE skip_test). That worked in that make test now ignored skip_test, but running make skip_test responded with "No tests found!!!".
Any suggestions would be appreciated!
I actually used a different solution. Here is what I did. For the tests that I wanted to exclude, I used the following command when adding them:
add_test( ..... CONFIGURATIONS ignore_flag)
where ignore_flag is whatever phrase you want. Then, in my CMakeLists.txt, when I define a custom target
add_custom_target( ignore_tests ...)
I give it ctest .... -C ignore_flag
Now, make test WILL skip these tests! make ignore_tests will run the ignored tests plus the un-ignored tests, which I'm okay with.
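A minimal CMake sketch of that approach, with illustrative test and target names:
enable_testing()
add_test(NAME regular_test COMMAND regular_test)
# runs only when ctest is invoked with -C ignore_flag
add_test(NAME skip_test COMMAND skip_test CONFIGURATIONS ignore_flag)
add_custom_target(ignore_tests
    COMMAND ${CMAKE_CTEST_COMMAND} -C ignore_flag
    VERBATIM)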
I'm not sure of a way to do this entirely via CTest, but since you've tagged this question with "googletest", I assume you're using that as your test framework. So, you could perhaps make use of Gtest's ability to disable tests and also to run disabled tests.
By changing the test(s) in question to have a leading DISABLED_ in their name(s), these won't be run by default when you do make test.
You can then add your custom target which will invoke your test executable with the appropriate Gtest flags to run only the disabled tests:
add_custom_target(skip_test
    COMMAND MyTestBinary --gtest_filter=*DISABLED_* --gtest_also_run_disabled_tests
    VERBATIM)
It's a bit of an abuse of the Gtest functionality - it's really meant to be used to temporarily disable tests while you refactor whatever to get the test passing again. This beats just commenting out the test since it continues to compile it, and it gives a nagging reminder after running the suite that you have disabled tests.

How do you create tests for "make check" with GNU autotools

I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verification. I would like to just type "make check" to have it automatically run these. My project is in C++, although I am still curious about writing automated tests for other languages as well.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It should then be automatically run when you make check, and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
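A minimal Makefile.am sketch under those assumptions (the names are illustrative; note that automake canonicalizes dashes to underscores in per-target variables):
check_PROGRAMS = my-test-executable
my_test_executable_SOURCES = my-test.cpp
TESTS = $(check_PROGRAMS)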
I'm using Check 0.9.10
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
./configure.ac
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
./tests/Makefile.am for the test code
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_builddir)/src/foo.h
check_foo_CFLAGS = @CHECK_CFLAGS@
# link against the Check library found by PKG_CHECK_MODULES
check_foo_LDADD = @CHECK_LIBS@
and write the test code, ./tests/check_foo.c:
START_TEST (test_foo)
{
    ck_assert( foo() == 0 );
    ck_assert_int_eq( foo(), 0 );
}
END_TEST
/// Some suite/tcase boilerplate (tcase_xxx etc.) is also needed to run this test - sketched below
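/* A minimal sketch of that boilerplate, assuming the standard Check API;
 * the suite/tcase names are illustrative, and the file needs
 * #include <check.h> plus the header declaring foo(). */
Suite *foo_suite(void)
{
    Suite *s = suite_create("foo");
    TCase *tc_core = tcase_create("core");
    tcase_add_test(tc_core, test_foo);
    suite_add_tcase(s, tc_core);
    return s;
}

int main(void)
{
    /* run the suite and report the number of failures as the exit code */
    SRunner *sr = srunner_create(foo_suite());
    srunner_run_all(sr, CK_NORMAL);
    int number_failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return (number_failed == 0) ? 0 : 1;
}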
Using Check you can also use timeouts and raise signals; it is very helpful.
You seem to be asking 2 questions in the first paragraph.
The first is about adding tests to the GNU autotools toolchain - but those tests, if I'm understanding you correctly, are both for validating that the environment necessary to build your application exists (dependent libraries and tools) and for adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests; you've proposed doing so from the autotools toolchain, presumably from the configure script. Doing that isn't conventional, though - putting a 'test' target in your Makefile is a more conventional way of executing your test suite. The typical steps for building and installing an application with autotools (at least from a user's perspective, not from yours, the developer's) are to run the configure script, then run make, then optionally run make test, and finally make install.
For the second issue - not wanting cppunit to be a dependency - why not just distribute it with your C++ application? Can you just put it right in whatever archive format you're using (be it tar.gz, tar.bz2 or .zip) along with your source code? I've used cppunit in the past and was happy with it, having used JUnit and other xUnit-style frameworks.
Here is a method without dependencies:
#src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
make check will then work naturally and show formatted, summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make dist, nothing from src/test/* will be in the tarball; test code is not in the dist, only the source is. When you do a make distcheck, it will run make check and run your tests.
You can use Automake's TESTS to run programs generated with check_PROGRAMS, but this assumes that you are using a log driver and a compiler for the output. It is probably easier to still use check_PROGRAMS but to invoke the test suite using a local rule in the Makefile:
check_PROGRAMS=testsuite
testsuite_SOURCES=...
testsuite_CFLAGS=...
testsuite_LDADD=...
check-local:
	./testsuite