Within Rust's build tool Cargo, I can define a function for a test case simply like this:
#[test]
fn test_something() {
    assert_eq!(true, true);
}
Running with cargo test outputs:
Running unittests src/lib.rs (target/debug/deps/example-14178b7a1a1c9c8c)
running 1 test
test test::test_something ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
So far, so good. Now I have a special case: the test cases in my project are defined in files. They can be executed using a special testcase() function that performs all the steps to run the test and makes the appropriate assert_eq!() calls.
Now I want to achieve the following:
Read all files from a tests/ folder
Run testcase() on every file
Have every file as a single test case in cargo test
The last point is the problem. This is the function (marked as a test) that runs all tests from the folder using the testcase() function:
#[test]
// Run all tests in the tests/ folder
fn tests() {
    let cases = std::fs::read_dir("tests").unwrap();
    for case in cases {
        testcase(case.unwrap().path().to_str().unwrap());
    }
}
A run of cargo test should print the following, given the test cases a.test, b.test and c.test:
Running unittests src/lib.rs (target/debug/deps/example-14178b7a1a1c9c8c)
running 3 tests
test test::tests::tests/a.test ... ok
test test::tests::tests/b.test ... ok
test test::tests::tests/c.test ... ok
test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Is this somehow possible?
With Rust's built-in test harness, there is no way to declare a single test case other than with the #[test] attribute. In order to get the result you want, you would need to write a proc-macro which reads your directory of tests and generates a #[test] function for each one.
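For illustration, such a proc-macro might look roughly like the sketch below. This is only a sketch, not a drop-in solution: it assumes a separate proc-macro crate (with proc-macro = true in its Cargo.toml and quote/proc-macro2 as dependencies), the names generate_file_tests and crate::testcase are hypothetical, and it glosses over how the tests/ path resolves at macro-expansion time.
use proc_macro::TokenStream;
use quote::{format_ident, quote};

#[proc_macro]
pub fn generate_file_tests(_input: TokenStream) -> TokenStream {
    let mut out = proc_macro2::TokenStream::new();
    for entry in std::fs::read_dir("tests").unwrap() {
        let path = entry.unwrap().path();
        let path_str = path.to_str().unwrap().to_owned();
        // Derive a legal function name from the file name, e.g. "a.test" -> file_a.
        let name = format_ident!(
            "file_{}",
            path.file_stem().unwrap().to_str().unwrap().replace('-', "_").replace('.', "_")
        );
        // Emit one #[test] function per file, each calling testcase() on its path.
        out.extend(quote! {
            #[test]
            fn #name() {
                crate::testcase(#path_str);
            }
        });
    }
    out.into()
}
The test module would then just invoke generate_file_tests!(); once.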
Instead, it will probably be more practical to write a test which uses your own harness code. In your Cargo.toml, define a test target with the default harness disabled:
[[test]]
name = "lang_tests"
harness = false
Then in tests/lang_tests.rs (the tests/ directory goes next to src/ and is already known to Cargo), write a program with an ordinary main() that runs tests:
fn main() {
    let cases = std::fs::read_dir("tests").unwrap();
    let mut passed = 0;
    let mut failed = 0;
    for case in cases {
        if testcase(case.unwrap().path().to_str().unwrap()) {
            eprintln!("...");
            passed += 1;
        } else {
            eprintln!("...");
            failed += 1;
        }
    }
    eprintln!("{passed} passed, {failed} failed");
    if failed > 0 {
        std::process::exit(1);
    }
}
Of course there's a bunch of further details to implement here — but I hope this illustrates how to write a custom test harness that does the thing you want to do. Again, you cannot do it with the default test harness except by using macro-generated code, which is probably not worthwhile in this case.
Related
I am implementing a virtual machine in Rust as an exercise. Some integration tests run small programs on that virtual machine and make assertions about the final result.
However, it can happen that a bug leads to the machine getting stuck in an infinite loop, meaning that cargo test never finishes.
Is there a way to make a unit / integration test fail after a given amount of time? Maybe something similar to #[should_panic], but instead, say, #[time_limit(500)]?
The ntest crate, with its #[timeout(<ms>)] macro (mentioned in the comments by SirDarius), may be a simpler solution than the following.
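As a rough sketch of what that looks like (assuming ntest is added as a dev-dependency; the 500 ms limit and the test body are purely illustrative):
use ntest::timeout;

#[test]
#[timeout(500)] // fail this test if it runs longer than 500 ms
fn finishes_in_time() {
    assert_eq!(2 + 2, 4);
}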
As an alternative, there is a crate that allows creating processes that run closures, and it has some test-related features. Unlike threads, processes can be killed if they run too long.
The procspawn crate has this closure feature, plus other features that are useful for tests where timeouts are desired. One of them is serialization of the return value from the code invoked in the child process, so the test code can get the result back and check it in a very straightforward way.
The code to be tested:
use std::thread;
use std::time::Duration;

fn runs_long() -> i32 {
    thread::sleep(Duration::from_millis(5000));
    42
}
Within the same file, we can add some tests with timeouts:
#[cfg(test)]
mod tests {
    use std::time::Duration;
    use procspawn;
    use crate::*;

    procspawn::enable_test_support!();

    #[test]
    fn runs_long_passes() {
        let handle = procspawn::spawn((), |_| runs_long());
        match handle.join_timeout(Duration::from_millis(6000)) {
            Ok(result) => assert_eq!(result, 42),
            Err(e) => panic!("{}", e),
        }
    }

    #[test]
    fn runs_long_fails() {
        let handle = procspawn::spawn((), |_| runs_long());
        match handle.join_timeout(Duration::from_millis(1000)) {
            Ok(result) => assert_eq!(result, 42),
            Err(e) => panic!("{}", e),
        }
    }
}
And running the tests with cargo test on the command line, we get:
running 3 tests
test tests::procspawn_test_helper ... ok
test tests::runs_long_fails ... FAILED
test tests::runs_long_passes ... ok
failures:
---- tests::runs_long_fails stdout ----
thread 'tests::runs_long_fails' panicked at 'process spawn error: timed out', src/main.rs:50:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
tests::runs_long_fails
test result: FAILED. 2 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 6.01s
error: test failed, to rerun pass '--bin procspawn'
To enable the test support feature, include the following in your Cargo.toml file:
[dependencies]
procspawn = { version = "0.10", features = ["test-support"] }
Using Google Test, I want to test the behaviour of the ClientListener::AcceptRequest method:
class ClientListener {
 public:
  // Clients can call this method, want to test that it works
  Result AcceptRequest(const Request& request) {
    queue_.Add(request);
    // ... blocks waiting for result ...
    return result;
  }

 private:
  // Executed by the background_thread_
  void ProcessRequestsInQueue() {
    while (true) {
      Process(queue_.PopEarliest());
    }
  }

  MyQueue queue_;
  std::thread background_thread_ = std::thread([this] { ProcessRequestsInQueue(); });
};
The method accepts a client request, queues it, blocks waiting for a result, returns a result when available.
The result is available when the background thread processes the corresponding request from a queue.
I have a test which looks as follows:
TEST(ListenerTest, TwoRequests) {
  ClientListener listener;
  Result r1 = listener.AcceptRequest(request1);
  Result r2 = listener.AcceptRequest(request2);
  ASSERT_EQ(r1, correctResultFor1);
  ASSERT_EQ(r2, correctResultFor2);
}
Since the implementation of a ClientListener class involves multiple threads, this test might pass on one attempt but fail on another. To increase the chance of capturing a bug, I run the test multiple times:
TEST_P(ListenerTest, TwoRequests) {
  // ... same as before ...
}
INSTANTIATE_TEST_CASE_P(Instantiation, ListenerTest, Range(0, 100));
But now the make test command treats each parameterised instantiation as a separate test, and in the logs I see 100 tests:
Test 1: Instantiation/ListenerTest.TwoRequests/1
Test 2: Instantiation/ListenerTest.TwoRequests/2
...
Test 100: Instantiation/ListenerTest.TwoRequests/100
Given that I do not use the parameter value, is there a way to rewrite the testing code such that the make test command would log a single test executed 100 times, rather than 100 tests?
Simple answer: using --gtest_repeat when executing the tests will do the trick (the default is 1).
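For example, repeating the suite 100 times and stopping at the first failure could look like this (listener_test is just a placeholder for whatever your test binary is called):
./listener_test --gtest_repeat=100 --gtest_break_on_failure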
Longer answer: unit tests shouldn't be used for this kind of test. GTest is thread-safe by design (as stated in its README), but that doesn't mean it is a good tool for such tests. Maybe this is a good starting point to actually begin working on real integration tests; I really recommend Python's behave framework for that purpose.
When I run the following program with cargo test:
use std::panic;

fn assert_panic_func(f: fn() -> (), msg: String) {
    let result = panic::catch_unwind(|| {
        f();
    });
    assert!(result.is_err(), msg);
}

macro_rules! assert_panic {
    ($test:expr, $msg:tt) => {{
        fn wrapper() {
            $test;
        }
        assert_panic_func(wrapper, $msg.to_string())
    }};
}
fn main() {
    println!("running main()");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn t_1() {
        assert_panic!(
            panic!("We are forcing a panic"),
            "This is asserted within function (raw panic)."
        );
        // assert_panic!(true, "This is asserted within function (raw true).");
    }
}
I get the expected output:
running 1 test
test tests::t_1 ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
If I uncomment the second assert_panic!(...) line, and rerun cargo test, I get the following output:
running 1 test
test tests::t_1 ... FAILED
failures:
---- tests::t_1 stdout ----
thread 'tests::t_1' panicked at 'We are forcing a panic', src/lib.rs:29:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tests::t_1' panicked at 'This is asserted within function (raw true).', src/lib.rs:7:5
failures:
tests::t_1
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
The second panic is legitimate, and that is what I am looking for, but the first panic seems to be triggered by the line that did not trigger a panic in the first invocation.
What is going on, and how do I fix it?
The stderr output
thread 'tests::t_1' panicked at 'We are forcing a panic', src/main.rs:30:23
is logged regardless of whether the panic is caught; the test runner just does not show any logged output unless a test fails. To suppress that text entirely, you would need to separately swap out the panic notification hook using std::panic::set_hook:
fn assert_panic_func(f: fn() -> (), msg: String) {
    let previous_hook = panic::take_hook();
    // Override the default hook to avoid logging panic location info.
    panic::set_hook(Box::new(|_| {}));
    let result = panic::catch_unwind(|| {
        f();
    });
    panic::set_hook(previous_hook);
    assert!(result.is_err(), msg);
}
All that said, I second @SCappella's answer about using #[should_panic].
Even if std::panic::catch_unwind catches the panic, any output from that panic will be printed. The reason you don't see anything with the first test (with the commented out second panic) is that cargo test doesn't print output from successful tests.
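(If you do want to see the captured output of passing tests, you can tell the harness not to capture it, for example by running cargo test -- --nocapture.)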
To see this behavior more clearly, you can use main instead of a test. (playground)
fn main() {
    let _ = std::panic::catch_unwind(|| {
        panic!("I don't feel so good Mr. catch_unwind");
    });
    println!("But the execution of main still continues.");
}
Running this gives the output
thread 'main' panicked at 'I don't feel so good Mr. catch_unwind', src/main.rs:3:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
But the execution of main still continues.
Note that panics usually output to stderr, rather than stdout, so it's possible to filter these out.
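For instance, running the main example above with stderr discarded, e.g. cargo run 2>/dev/null, prints only the final println! line.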
See also Suppress panic output in Rust when using panic::catch_unwind.
I'm not sure if this is what you're trying to do, but if you want to ensure that a test panics, use the should_panic attribute. For example,
#[test]
#[should_panic]
fn panics() {
    panic!("Successfully panicked");
}
At the time, I was not aware that unit tests suppress output messages. I later learned about this while researching why println!(...) would not work within unit tests; that it might also explain why panics sometimes display output and sometimes do not does make sense.
Nonetheless, it seems perverse to me that a panic produces output even when I explicitly tell Rust that I want to prevent the panic from having any effect; but if that is what Rust does, however perverse it might seem, one has to live with it.
I was aware of the #[should_panic] attribute, but was not happy with this solution for two reasons:
Firstly, it requires that each test become a separate function, whereas I tend to put a number of closely related tests (many of them no more than a single assert!(...) statement) into one function.
Secondly, it would be nice to have a single model for expressing every test. To my mind, testing whether an expression raises a panic (or fails to raise one) is no different from testing whether a result is equal, or unequal, to some particular value. It makes far more sense to me to have one model that expresses both kinds of test, hence my desire for an assert_panic!(...) macro that behaves analogously to the assert!(...) or assert_eq!(...) macros. It seems that this is simply not achievable within Rust.
Thank you for clearing that up.
Since the test function aborts on a failure, one cannot simply clean up at the end of the function under test.
In testing frameworks for other languages, there is usually a way to set up a callback that handles cleanup at the end of each test function. Is there an equivalent in Rust?
Use RAII and implement Drop. It removes the need to call anything:
struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("I'm melting! Meeeelllllttttinnnng!");
    }
}

#[test]
fn always_fails() {
    let my_setup = Noisy;
    assert!(false, "or else...!");
}
running 1 test
test always_fails ... FAILED
failures:
---- always_fails stdout ----
thread 'always_fails' panicked at 'or else...!', main.rs:12
note: Run with `RUST_BACKTRACE=1` for a backtrace.
I'm melting! Meeeelllllttttinnnng!
failures:
always_fails
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured
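If the setup also has to hand data to the test, the same RAII idea works with a guard struct that owns the resource. Here is a minimal sketch under that assumption (the TempDir type and the paths are hypothetical, not part of the original answer):
use std::fs;
use std::path::PathBuf;

// Hypothetical guard: creates a scratch directory when constructed and
// removes it again when dropped, even if the test panics.
struct TempDir {
    path: PathBuf,
}

impl TempDir {
    fn new(name: &str) -> TempDir {
        let path = std::env::temp_dir().join(name);
        fs::create_dir_all(&path).unwrap();
        TempDir { path }
    }
}

impl Drop for TempDir {
    fn drop(&mut self) {
        // Best-effort cleanup; ignore errors so Drop itself never panics.
        let _ = fs::remove_dir_all(&self.path);
    }
}

#[test]
fn uses_scratch_dir() {
    let dir = TempDir::new("raii_cleanup_example");
    fs::write(dir.path.join("data.txt"), b"hello").unwrap();
    assert!(dir.path.join("data.txt").exists());
    // `dir` goes out of scope here (or during unwinding on failure),
    // so the directory is removed either way.
}
Because drop runs during unwinding, the cleanup happens whether the assertions pass or panic.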
I want to test a few functions that are included in my main package, but my tests don't appear to be able to access those functions.
My sample main.go file looks like:
package main

import (
    "log"
)

func main() {
    log.Printf(foo())
}

func foo() string {
    return "Foo"
}
and my main_test.go file looks like:
package main
import (
"testing"
)
func Foo(t testing.T) {
t.Error(foo())
}
when I run go test main_test.go I get
# command-line-arguments
.\main_test.go:8: undefined: foo
FAIL command-line-arguments [build failed]
As I understand it, even if I moved the test file elsewhere and tried importing from main.go, I couldn't import it, since it's package main.
What is the correct way of structuring such tests? Should I remove everything from the main package except a simple main function that runs everything, and then test the functions in their own packages, or is there a way to call the functions in the main package during testing?
When you specify files on the command line, you have to specify all of them.
Here's my run:
$ ls
main.go main_test.go
$ go test *.go
ok command-line-arguments 0.003s
Note that in my version, I ran with both main.go and main_test.go on the command line.
Also, your _test file is not quite right: the test function needs to be called TestXXX and take a pointer to testing.T.
Here's the modified version:
package main

import (
    "testing"
)

func TestFoo(t *testing.T) {
    t.Error(foo())
}
and the modified output:
$ go test *.go
--- FAIL: TestFoo (0.00s)
main_test.go:8: Foo
FAIL
FAIL command-line-arguments 0.003s
Unit tests only go so far. At some point you have to actually run the program. Then you test that it works with real input, from real sources, producing real output to real destinations. For real.
If you want to unit test a thing, move it out of main().
This is not a direct answer to the OP's question, and I'm in general agreement with the prior answers and comments urging that main should be mostly a caller of packaged functions. That said, here's an approach I'm finding useful for testing executables. It makes use of log.Fatalln and exec.Command.
Write main.go with a deferred function that calls log.Fatalln() to write a message to stderr before returning.
In main_test.go, use exec.Command(...) and cmd.CombinedOutput() to run your program with arguments chosen to test for some expected outcome.
For example:
func main() {
    // Ensure we exit with an error code and log message
    // when needed, after deferred cleanups have run.
    // Credit: https://medium.com/@matryer/golang-advent-calendar-day-three-fatally-exiting-a-command-line-tool-with-grace-874befeb64a4
    var err error
    defer func() {
        if err != nil {
            log.Fatalln(err)
        }
    }()

    // Initialize and do stuff,
    // check for errors in the usual way.
    err = somefunc()
    if err != nil {
        err = fmt.Errorf("somefunc failed : %v", err)
        return
    }

    // do more stuff ...
}
In main_test.go, a test for, say, bad arguments that should cause somefunc to fail could look like:
func TestBadArgs(t *testing.T) {
    var err error
    cmd := exec.Command(yourprogname, "some", "bad", "args")
    out, err := cmd.CombinedOutput()
    sout := string(out) // because out is []byte
    if err != nil && !strings.Contains(sout, "somefunc failed") {
        fmt.Println(sout) // so we can see the full output
        t.Errorf("%v", err)
    }
}
Note that err from CombinedOutput() is the non-zero exit code from log.Fatalln's under-the-hood call to os.Exit(1). That's why we need to use out to extract the error message from somefunc.
The exec package also provides cmd.Run and cmd.Output. These may be more appropriate than cmd.CombinedOutput for some tests. I also find it useful to have a TestMain(m *testing.M) function that does setup and cleanup before and after running the tests.
func TestMain(m *testing.M) {
    // call flag.Parse() here if TestMain uses flags
    os.Mkdir("test", 0777) // set up a temporary dir for generated files
    // Create whatever test files are needed in test/

    // Run all tests and clean up
    exitcode := m.Run()
    os.RemoveAll("test") // remove the directory and its contents
    os.Exit(exitcode)
}
How to test main with flags and assert the exit codes
@MikeElis's answer got me halfway there, but there was a major part missing, which Go's own flag_test.go helped me figure out.
Disclaimer
You essentially want to run your app and test its correctness, so label this kind of test any way you want and file it in that category. But it's worth trying this type of test out and seeing the benefits, especially if you're writing a CLI app.
The idea is to run go test as usual, and:
Have a unit test run "itself" in a sub-process, using the test build of the app that go test makes (see line 86 of the full example linked below).
Pass environment variables to the sub-process (see line 88) so that it executes the section of code that runs main, and have the test exit with main's exit code:
if os.Getenv(SubCmdFlags) != "" {
    // We're in the test binary, so test flags are set; reset os.Args so
    // that only the program name and whatever flags we want are set.
    args := strings.Split(os.Getenv(SubCmdFlags), " ")
    os.Args = append([]string{os.Args[0]}, args...)

    // Anything you print here will be passed back to cmd.Stderr and
    // cmd.Stdout below, for example:
    fmt.Printf("os args = %v\n", os.Args)

    // Strangely, I expected to need to call the code in init() manually,
    // but that seems to happen automatically. So yet more to learn.
    main()
}
NOTE: if the main function does not exit, the test will hang or loop forever.
Then assert on the exit code returned from the sub-process.
// Get the exit code.
got := cmd.ProcessState.ExitCode()
if got != test.want {
    t.Errorf("got %q, want %q", got, test.want)
}
NOTE: In this example, if anything other than the expected exit code is returned, the test outputs the STDOUT and or STDERR from the sub-process, for help with debugging.
See full example here: go-gitter: Testing the CLI
Because you passed only one file to go test, it does not build the other Go files in the package. Run go test instead of go test main_test.go.
Also, change the test function signature from Foo(t testing.T) to TestFoo(t *testing.T).
Change package name from main to foobar in both sources.
Move source files under src/foobar.
mkdir -p src/foobar
mv main.go main_test.go src/foobar/
Make sure to set GOPATH to the folder where src/foobar resides.
export GOPATH=`pwd -P`
Test it with
go test foobar