I have a Rust test which delegates to a C++ test suite using doctest, and I want to pass command-line parameters through to it. My first attempt was:
// in mod ffi
pub fn run_tests(cli_args: &mut [String]) -> bool;

#[test]
fn run_cpp_test_suite() {
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        panic!("C++ test suite reported errors");
    }
}
Because cargo test --help shows
USAGE:
cargo.exe test [OPTIONS] [TESTNAME] [-- <args>...]
I expected
cargo test -- --test-case="X"
to let run_cpp_test_suite access and pass on the --test-case="X" parameter. But it doesn't; I get error: Unrecognized option: 'test-case', and cargo test -- --help shows that the test binary has a fixed set of options:
Usage: --help [OPTIONS] [FILTER]
Options:
--include-ignored
Run ignored and not ignored tests
--ignored Run only ignored tests
...
My other idea was to pass the arguments in an environment variable, that is
DOCTEST_ARGS="--test-case='X'" cargo test
but then I need to somehow split that string into arguments (handling at least spaces and quotes correctly) either in Rust or in C++.
There are two pieces of Rust toolchain involved when you run cargo test.
cargo test itself looks for all testable targets in your package or workspace, builds them with cfg(test), and runs those binaries. cargo test processes the arguments to the left of the --, and the arguments to the right are passed to the binary.
Then,
Tests are built with the --test option to rustc which creates an executable with a main function that automatically runs all functions annotated with the #[test] attribute in multiple threads. #[bench] annotated functions will also be run with one iteration to verify that they are functional.
The libtest harness may be disabled by setting harness = false in the target manifest settings, in which case your code will need to provide its own main function to handle running tests.
The “libtest harness” is what rejects your extra arguments. In your case, since you're intending to run an entire other test suite, I believe it would be appropriate to disable the harness.
Move your delegation code to its own file, conventionally located in tests/ in your package directory:
Cargo.toml
src/
    lib.rs
    ...
tests/
    cpp_test.rs
Write an explicit target section in your Cargo.toml for it, with harness disabled:
[[test]]
name = "cpp_test"
# path = "tests/cpp_test.rs" # This is automatic; you can use a different path if you really want to.
harness = false
In cpp_test.rs, instead of writing a function with the #[test] attribute, write a normal main function which reads env::args() and calls the C++ tests.
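For example, a minimal sketch (assuming the ffi::run_tests binding from the question, exported by your library crate, here hypothetically named mycrate):

// tests/cpp_test.rs
use std::env;
use mycrate::ffi;

fn main() {
    // With harness = false, everything after `--` on the
    // `cargo test` command line shows up in env::args().
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        // A panic (or std::process::exit(1)) makes the binary exit
        // nonzero, which cargo test reports as a failure.
        panic!("C++ test suite reported errors");
    }
}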
[Disclaimer: I'm familiar with these mechanisms because I've used Criterion benchmarking (which similarly requires disabling the default harness) but I haven't actually written a test with custom arguments the way you're looking for. So, some details might be wrong. Please let me know if anything needs correcting.]
In addition to Kevin Reid's answer, if you don't want to write your own test harness, you can use the shell-words crate to split an environment variable into individual arguments following shell rules:
use std::env::var;
use std::process::Command;

let args = var("DOCTEST_ARGS").unwrap_or_else(|_| String::new());
let args = shell_words::split(&args).expect("failed to parse DOCTEST_ARGS");

// Fail the test if the C++ suite reports errors.
let status = Command::new("cpptest")
    .args(args)
    .spawn()
    .expect("failed to start subprocess")
    .wait()
    .expect("failed to wait for subprocess");
assert!(status.success(), "C++ test suite reported errors");
The documentation for Rust tests indicates that unit tests are run when the module that defines them is compiled in, gated by the test configuration flag:
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        let result = 2 + 2;
        assert_eq!(result, 4);
    }
}
In a large project, however, it is common to move a test module into a separate file, which the module declaration then points to explicitly:
#[cfg(test)]
#[path = "unit_tests/tests.rs"]
mod tests;
With this setup, in a large project, it may happen that someone deletes the reference to src/unit_tests/tests.rs (i.e. the code block above), without deleting the test file itself. This usually indicates a bug:
either the removal was intentional, and the file src/unit_tests/tests.rs should not remain in the repository,
or it wasn't, in which case the tests in src/unit_tests/tests.rs are meant to be run and aren't, and CI should be vocal about that.
I'd like to add a verification that there are no such unmoored test files in my crate, to the CI of my project. Is there any tooling in the rust ecosystem that could help me detect those unlinked test files programmatically?
(Integration tests are automatically compiled if found in the tests/ directory of the crate, but IIUC that does not apply to unit tests.)
There is an issue asking for cargo to do this check, cargo (publish) should complain about unused source files, and from it I found that the cargo-modules crate can do this. For example:
//! src/lib.rs
#[cfg(test)]
mod tests;

//! src/tests.rs
#[test]
fn it_works() {
    let result = 2 + 2;
    assert_eq!(result, 4);
}
> cargo modules generate tree --lib --with-orphans --cfg-test
crate mycrate
└── mod tests: pub(crate) #[cfg(test)]
However, if I remove the #[cfg(test)] mod tests; declaration, the output looks like this:
> cargo modules generate tree --lib --with-orphans --cfg-test
crate mycrate
└── mod tests: orphan
You would be able to grep for "orphan" and fail your check based on that.
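For example, a CI step could invert grep's exit status so the job fails exactly when an orphan is reported (a sketch):

> ! cargo modules generate tree --lib --with-orphans --cfg-test | grep -q orphan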
I will say, though, that this tool seems very slow in my small tests. The discussion in the linked issue indicates that a better approach would have to be used if this were implemented in cargo.
You will also have to keep in mind that other conditional compilation may yield false positives (i.e. you legitimately include or exclude certain modules based on architecture, OS, or features). There is an --all-features flag that could help with some of that.
There is a very old rustc issue on the subject, which remains open.
However one of the commenters provides a hack:
One possible way to implement this (including as a third-party tool) is to parse the .d files rustc emits for cargo, and look for any .rs files that aren't mentioned.
rg --no-filename '^[^/].*\.rs:$' target/debug/deps/*.d | sed 's/:$//' | sort -u | diff - <(fd '\.rs$' | sort -u)
Apparently the .d files are
makefile-compatible dependency lists
and so I assume they should list all of the Rust files in the project. Which is why, if one of your files is missing from them, it was never seen by cargo / rustc.
I often wrote my test code in the main function while developing an API, but because D has integrated unittest blocks I want to start using them.
My current workflow is the following: I have a script that watches for file changes in any .d files; if the script finds a modified file, it runs dub build.
The problem is that dub build doesn't seem to build the unittests:
module foo;

struct Bar{..}

unittest{
    ...
    // some syntax error here
    ...
}
It only compiles the unittests if I explicitly run dub test, but I don't want to run and compile them at the same time.
The second problem is that I want to be able to run the unittests for a single module, for example:
dub test module foo
Would this be possible?
You can program a custom test runner using the trait getUnitTests.
getUnitTests
Takes one argument, a symbol of an aggregate (e.g. struct/class/module). The result is a tuple of all the unit test functions of that aggregate. The functions returned are like normal nested static functions, CTFE will work and UDA's will be accessible.
In your main() you should be able to write something that takes an arbitrary number of modules:
void runModuleTests(Modules...)()
{
    static if (Modules.length >= 1)
    {
        // Run every unittest block of the first module...
        foreach (test; __traits(getUnitTests, Modules[0]))
            test();
        // ...then recurse over the remaining modules.
        static if (Modules.length > 1)
            runModuleTests!(Modules[1 .. $])();
    }
}
Of course, the -unittest switch must be passed to dmd.
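A hypothetical invocation (foo and bar are placeholder module names):

void main()
{
    import foo, bar;

    // Runs the unittest blocks of both modules; the blocks only
    // exist in the binary when it was compiled with -unittest.
    runModuleTests!(foo, bar)();
}

Note that with a plain -unittest build, druntime's default runner also runs all unittests before main unless you override it (e.g. via Runtime.moduleUnitTester).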
Is there an established best practice for separating unit tests and integration tests in GoLang (testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run slower). So, I want to be able to control whether or not to include the integration tests when I say go test.
The most straight-forward technique would seem to be to define a -integrate flag in main:
var runIntegrationTests = flag.Bool("integration", false,
    "Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
    this.T().Skip("To run this test, use: go test -integration")
}
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
    f, err := foo.Connect(*fooAddr)
    // ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build test comment is the first line in your file, and that you include a blank line after the comment, otherwise the -tags command will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
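For example, a minimal file header that satisfies both rules (a sketch; note the blank line before the package clause):

// +build integration

package services_test

Newer Go toolchains (1.17+) also accept the //go:build integration form alongside the old comment.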
To elaborate on my comment on @Ainar-G's excellent answer: over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and Integration tests harmony, in the same file
Build flags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services

import "testing"

func TestServiceFunc(t *testing.T) {
    t.Parallel()
    ...
}

func TestInvalidServiceFunc3(t *testing.T) {
    t.Parallel()
    ...
}

func TestPostgresVersionIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test")
    }
    ...
}
Notice the last test has the convention of:
using Integration in the test name.
checking if running under -short flag directive.
Basically, the spec goes: "Write all tests normally. If it is a long-running test, or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
this provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously the downside to this approach is if anyone runs go test, without the -short flag, it will default to run all tests - unit and integration tests.
In reality, if your project is large enough to have unit and integration tests, then you are most likely using a Makefile, where you can have simple directives that use go test -short. Or, just put it in your README.md file and call it a day.
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and get it in your tests setup with os.Getenv. Then you would use simple go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
I was trying to find a solution for the same recently.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main

import (
    "flag"
    "regexp"
    "testing"
)

func TestIntegration(t *testing.T) {
    if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }

    t.Parallel()

    t.Run("HelloWorld", testHelloWorld)
    t.Run("SayHello", testSayHello)
}
The implementation is straightforward and minimal. It requires a simple convention for tests, but it is less error prone. A further improvement could be extracting the check into a helper function.
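Such a helper might look like this (a sketch; the package and function names are hypothetical):

package testhelper

import (
    "flag"
    "regexp"
    "testing"
)

// RequireExplicitRun skips the calling test unless its name matches
// the pattern passed via go test -run.
func RequireExplicitRun(t *testing.T) {
    t.Helper()
    m := flag.Lookup("test.run").Value.String()
    if m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }
}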
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running Go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode or flags, see here.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
    t.Helper()
    if os.Getenv("INTEGRATION") == "" {
        t.Skip("skipping integration tests, set environment variable INTEGRATION")
    }
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
    IntegrationTest(t)
    // ...
}
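The integration tests are then opted into by setting the variable, e.g.:

INTEGRATION=true go test ./...

Any non-empty value works, since the helper only checks for the empty string.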
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and all tests are passing which is often not the case if this relies on external dependencies.
The problem with the flag package is that it works until you have integration tests across different packages, where some run flag.Parse() and some do not, which leads to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible, robust and require the least amount of code with no visible downsides.
I am writing unit tests for my golang code, and there are a couple methods that I would like to be ignored when coverage is calculated. Is this possible? If so, how?
One way to do it would be to put the functions you don't want tested in a separate Go file, and use a build tag to keep them from being included during tests. For example, I do this sometimes with applications where I have a main.go file containing the main function, maybe a usage function, etc., that don't get tested. Then you can add a test tag, like go test -v -cover -tags test, and the main file might look something like:
// +build !test

package main

func main() {
    // do stuff
}

func usage() {
    // show some usage info
}
I was trying to use gocheck to test my Go code. I was guiding myself with the following example (similar to the one provided on their website):
package hello_test

import (
    "testing"

    gocheck "gopkg.in/check.v1"
)

// Hook up gocheck into the "go test" runner.
func Test(t *testing.T) {
    gocheck.TestingT(t)
}

type MySuite struct{} // <==== Does the struct have to be named that way? What if we have multiple of these and register them, is it a problem?

var _ = gocheck.Suite(&MySuite{}) // <==== What does this line do?

func (s *MySuite) TestHelloWorld(c *gocheck.C) {
    c.Assert(42, gocheck.Equals, "42")
    c.Check(42, gocheck.Equals, 42)
}
However, there are some lines I am not sure I understand even after reading the documentation. Why is the line type MySuite struct{} needed, and, even more interesting, why is var _ = gocheck.Suite(&MySuite{}) needed? For the first one, it's easy to infer that one probably has to declare that struct and create functions with the signature shown that will run the tests. However, the second line beats me. I have literally no idea why it's needed. The documentation says:
Suite registers the given value as a test suite to be run. Any methods starting with the Test prefix in the given value will be considered as a test method.
However, I am not sure about a lot of things. For instance, is there a problem if I call this function with multiple suite structs in the same file? Is there anything special about the type MySuite struct? Would the gocheck testing suite work with some different struct being registered? Basically, how many times can we register a struct in one file, and will it still work?
The gocheck.Suite function has the side effect of registering the given suite value with the gocheck package. Internally, it just adds the suite to a slice of registered test suites. You could get the same effect with:

func init() {
    gocheck.Suite(&MySuite{})
}
Either form should work, so it is just a matter of style.
The tests in the registered test suites are run when you call gocheck.TestingT. You do this in your test called Test, which will be picked up by Go's testing framework. This is how gocheck tests are integrated into the testing framework. Note that you only need a single invocation of TestingT to run all test suites: not one for each test suite.
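For example, registering a second suite (AnotherSuite is an arbitrary name) still needs only the single hook into go test:

type MySuite struct{}
type AnotherSuite struct{}

var _ = gocheck.Suite(&MySuite{})
var _ = gocheck.Suite(&AnotherSuite{})

// One hook is enough: TestingT runs every registered suite.
func Test(t *testing.T) {
    gocheck.TestingT(t)
}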