A module contains multiple tests like the following:
#[cfg(test)]
mod tests_cases {
    /*
    #[test]
    fn test_1() {
        let a = 1;
        let b = 2;
        println!("{},{}", a, b);
        assert_ne!(a, b);
    }
    */
    #[test]
    fn test_2() {
        let c = 1;
        let d = 1;
        println!("currently working on this: {},{}", c, d);
        assert_eq!(c, d);
    }
}
When working on the second test with output visible
(cargo test -- --nocapture),
I do not want to see the output of the first test.
Is there an option to disable the commented unit test? Or is there an option to just run the second unit test?
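One option is to pass a name filter to cargo test: only tests whose names contain the filter string are run, so you can leave test_1 in place and still see only test_2's output:
cargo test test_2 -- --nocapture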
Another option is to add the #[ignore] attribute.
#[ignore]
#[test]
fn test_1() {
    let a = 1;
    let b = 2;
    println!("{},{}", a, b);
    assert_ne!(a, b);
}
This adds a nicely colored "ignored" to the test results.
test tests::test_1 ... ignored
test tests::test_2 ... ok
test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.03s
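If you later want to run only the ignored tests, you can request them explicitly:
cargo test -- --ignored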
Source: "Ignoring Some Tests Unless Specifically Requested", from The Rust Programming Language book.
If you remove the #[test] attribute from the unwanted test, then cargo test will ignore it.
I'm trying to write some unit tests for the following Kotlin code, which uses async to create coroutines:
class ClassToTest(val dependency1: Dependency1, val dependency2: Dependency2) {
    fun funToTest(list: List<X>) {
        runBlocking {
            val deferreds: List<Deferred<Type>> = list.map { item ->
                async(Dispatchers.Default) {
                    val v1 = dependency1.func1(item)
                    val v2 = dependency2.func2(v1)
                    someFunc(v2)
                }
            }
            val results = deferreds.awaitAll()
        }
    }
}
I mocked both dependency1 and dependency2 in the unit test and provided a list of 2 items as input to funToTest. I also mocked how the dependencies should return values based on different inputs, something like below:
val slot1 = slot<X>()
every { dependency1.func1(capture(slot1)) } answers {
    if (slot1.captured.field == xxx) {
        return1
    } else {
        return2
    }
}
// similar stubbing for dependency2
// invoke funToTest
However, this doesn't seem to work: I'm getting unexpected results indicating that the mocked objects didn't return the desired values across the two coroutines.
Does anyone have any idea what went wrong with this code?
/// Here is my class
class State {
    var state: Int = 10
}

open class Car {
    var state: State = State()

    fun changState(data: Int = 1) {
        setState(data)
    }

    fun setState(data: Int = 0) {
        state.state = data
    }
}
/// Here is my Test
@Test
fun `test 1`() {
    var mockCar = mockk<Car>()
    every { mockCar.changState(any()) } just runs
    every { mockCar.setState(any()) } just runs
    mockCar.changState(10)
    verify(exactly = 1) { mockCar.changState(any()) }
    verify { mockCar.setState(any()) }
}
But it fails with this error:
java.lang.AssertionError: Verification failed: call 1 of 1: Car(#1).setState(any())) was not called.
Calls to same mock:
Car(#1).changState(10)
You need to remove verify { mockCar.setState(any()) }; there is no way that this will ever be called, because you mocked changState:
every { mockCar.changState(any()) } just runs
This means the stubbed method does nothing; it "just runs", so to speak. The real changState body, which is what calls setState, is never executed.
I don't recommend writing tests that only test mocks, because it leads to a bias that the code is fine when you are really only checking the outputs you stubbed in yourself. Instead, write a separate unit test for Car.
For your use case a mock is not the right tool; you should use a spy instead if you want to mix real method calls with mocked behavior.
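For illustration, a minimal sketch of the spy approach with MockK (assuming JUnit 4 and the Car class above; spyk<Car>() constructs a real Car via its no-arg constructor) could look like:

import io.mockk.spyk
import io.mockk.verify
import org.junit.Test

class CarTest {
    @Test
    fun `test with spy`() {
        // A spy wraps a real Car: real method bodies run, but calls are still recorded.
        val spyCar = spyk<Car>()
        spyCar.changState(10)
        // The real changState body executed, so the internal setState call is now visible.
        verify(exactly = 1) { spyCar.changState(any()) }
        verify(exactly = 1) { spyCar.setState(any()) }
    }
}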
I tried to write a unit test in Rust, but when I run cargo test I get the following error:
"use of undeclared type Rating".
In the src/main.rs file I have defined the struct Rating like this:
#[derive(PartialEq, Debug, Clone, Copy)]
struct Rating(i8);

impl Rating {
    pub fn new(value: i32) -> Result<Rating, CreationError> {
        match value {
            v if v > 10 => Err(CreationError::PosOverflow),
            v if v < -10 => Err(CreationError::NegOverflow),
            _ => Ok(Rating(value as i8)),
        }
    }
}
My test file tests/test.rs looks like this:
#[cfg(test)]
fn create_new_rating() {
    assert_eq!(Rating::new(10).0, 10);
}
In the Rust documentation I only found examples where libraries are tested but not binaries. Do I have to use a different syntax in this case?
Your tests folder is for integration tests, and it needs to use your crate as though it were an external user. Add use mycratename::Rating to the top of tests/test.rs, and make Rating public.
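Note that integration tests under tests/ link against the crate's library target, so the Rating code needs to live in src/lib.rs; code that exists only in src/main.rs cannot be imported by them. As a sketch (mycratename stands in for your real crate name, and the tuple field must be public too, i.e. pub struct Rating(pub i8);):

// tests/test.rs
use mycratename::Rating;

#[test]
fn create_new_rating() {
    // new() returns a Result; unwrap() requires CreationError to derive Debug
    assert_eq!(Rating::new(10).unwrap().0, 10);
}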
If this is a unit test (which this looks like), it is idiomatic to put tests in the same file as the code. This is described in the Book. You would end up with something like:
#[derive(PartialEq, Debug, Clone, Copy)]
struct Rating(i8);

impl Rating {
    pub fn new(value: i32) -> Result<Rating, CreationError> {
        match value {
            v if v > 10 => Err(CreationError::PosOverflow),
            v if v < -10 => Err(CreationError::NegOverflow),
            _ => Ok(Rating(value as i8)),
        }
    }
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn create_new_rating() {
        // new() returns a Result, so unwrap it before reaching the inner value
        // (unwrap() requires CreationError to derive Debug).
        assert_eq!(Rating::new(10).unwrap().0, 10);
    }
}
I have a collection of tests. There are a few tests that need to access a shared resource (external library/API/hardware device). If any of these tests run in parallel, they fail.
I know I could run everything using --test-threads=1 but I find that inconvenient just for a couple of special tests.
Is there any way to keep running all tests in parallel and have an exception for a few? Ideally, I would like to say do not run X, Y, Z at the same time.
Use the serial_test crate. With this crate added, you put
#[serial]
in front of any test you want to run sequentially.
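A minimal sketch, assuming serial_test is listed under [dev-dependencies] in Cargo.toml:

use serial_test::serial;

#[test]
#[serial]
fn uses_shared_device_one() {
    // tests marked #[serial] never overlap with each other
}

#[test]
#[serial]
fn uses_shared_device_two() {
    // the rest of the suite still runs in parallel as usual
}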
As mcarton mentions in the comments, you can use a Mutex to prevent multiple pieces of code from running at the same time:
use once_cell::sync::Lazy; // 1.4.0
use std::{sync::Mutex, thread::sleep, time::Duration};

static THE_RESOURCE: Lazy<Mutex<()>> = Lazy::new(Mutex::default);

type TestResult<T = (), E = Box<dyn std::error::Error>> = std::result::Result<T, E>;

#[test]
fn one() -> TestResult {
    let _shared = THE_RESOURCE.lock()?;
    eprintln!("Starting test one");
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test one");
    Ok(())
}

#[test]
fn two() -> TestResult {
    let _shared = THE_RESOURCE.lock()?;
    eprintln!("Starting test two");
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test two");
    Ok(())
}
If you run with cargo test -- --nocapture, you can see the difference in behavior:
No lock
running 2 tests
Starting test one
Starting test two
Finishing test two
Finishing test one
test one ... ok
test two ... ok
With lock
running 2 tests
Starting test one
Finishing test one
Starting test two
test one ... ok
Finishing test two
test two ... ok
Ideally, you'd put the external resource itself in the Mutex to make the code represent the fact that it's a singleton and remove the need to remember to lock the otherwise-unused Mutex.
This does have the massive downside that a panic in a test (a.k.a. an assert! failure) will cause the Mutex to become poisoned. This will then cause subsequent tests to fail to acquire the lock. If you need to avoid that and you know the locked resource is in a good state (and () should be fine...) you can handle the poisoning:
let _shared = THE_RESOURCE.lock().unwrap_or_else(|e| e.into_inner());
If you need the ability to run a limited set of threads in parallel, you can use a semaphore. Here, I've built a poor one using Condvar with a Mutex:
use std::{
    sync::{Condvar, Mutex},
    thread::sleep,
    time::Duration,
};

#[derive(Debug)]
struct Semaphore {
    mutex: Mutex<usize>,
    condvar: Condvar,
}

impl Semaphore {
    fn new(count: usize) -> Self {
        Semaphore {
            mutex: Mutex::new(count),
            condvar: Condvar::new(),
        }
    }

    fn wait(&self) -> TestResult {
        let mut count = self.mutex.lock().map_err(|_| "unable to lock")?;
        while *count == 0 {
            count = self.condvar.wait(count).map_err(|_| "unable to lock")?;
        }
        *count -= 1;
        Ok(())
    }

    fn signal(&self) -> TestResult {
        let mut count = self.mutex.lock().map_err(|_| "unable to lock")?;
        *count += 1;
        self.condvar.notify_one();
        Ok(())
    }

    fn guarded(&self, f: impl FnOnce() -> TestResult) -> TestResult {
        // Not panic-safe!
        self.wait()?;
        let x = f();
        self.signal()?;
        x
    }
}
use lazy_static::lazy_static; // 1.4.0

lazy_static! {
    static ref THE_COUNT: Semaphore = Semaphore::new(4);
}
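// inside each test body, where `id` identifies the running test: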
THE_COUNT.guarded(|| {
    eprintln!("Starting test {}", id);
    sleep(Duration::from_secs(1));
    eprintln!("Finishing test {}", id);
    Ok(())
})
See also:
How to limit the number of test threads in Cargo.toml?
You can always provide your own test harness. You can do that by adding a [[test]] entry to Cargo.toml:
[[test]]
name = "my_test"
# If your test file is not `tests/my_test.rs`, add this key:
#path = "path/to/my_test.rs"
harness = false
In that case, cargo test will compile my_test.rs as a normal executable file. That means you have to provide a main function and add all the "run tests" logic yourself. Yes, this is some work, but at least you can decide everything about running tests yourself.
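For instance, a minimal harness with two hypothetical test functions might look like:

// tests/my_test.rs, compiled with harness = false
fn main() {
    // Without the default harness, you decide ordering and parallelism yourself.
    case_one();
    case_two();
    println!("all sequential tests passed");
}

fn case_one() {
    assert_eq!(2 + 2, 4);
}

fn case_two() {
    assert!("hello".starts_with("he"));
}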
You can also create two test files:
tests/
- sequential.rs
- parallel.rs
You then would need to run cargo test --test sequential -- --test-threads=1 and cargo test --test parallel. So it doesn't work with a single cargo test, but you don't need to write your own test harness logic.
For test functions, I can select which ones run with the -run option:
go test -run regex
It is very common, when we have dozens of test cases, to put them into an array in order not to write a separate function for each of them:
cases := []struct {
    arg, expected string
}{
    {"%a", "[%a]"},
    {"%-a", "[%-a]"},
    // and many others
}

for _, c := range cases {
    res := myfn(c.arg)
    if res != c.expected {
        t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
    }
}
This works well, but the problem is maintenance. When I add a new test case, while debugging I want to run just that new case, but I cannot say something like:
go test -run TestMyFn.onlyThirdCase
Is there an elegant way to have many test cases in an array together with the ability to choose which test case will run?
With Go 1.6 (and below)
This is not supported directly by the testing package in Go 1.6 and below; you have to implement it yourself. But it's not that hard: you can use the flag package to easily access command line arguments.
Let's see an example. We define an idx command line parameter; if it is present, only the case at that index will be executed, otherwise all test cases run.
Define the flag:
var idx = flag.Int("idx", -1, "specify case index to run only")
Parse command line flags (actually, this is not required as go test already calls this, but just to be sure / complete):
func init() {
    flag.Parse()
}
Using this parameter:
for i, c := range cases {
    if *idx != -1 && *idx != i {
        println("Skipping idx", i)
        continue
    }
    if res := myfn(c.arg); res != c.expected {
        t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
    }
}
Testing it with 3 test cases:
cases := []struct {
    arg, expected string
}{
    {"%a", "[%a]"},
    {"%-a", "[%-a]"},
    {"%+a", "[%+a]"},
}
Without the idx parameter:
go test
Output:
PASS
ok play 0.172s
Specifying an index:
go test -idx=1
Output:
Skipping idx 0
Skipping idx 2
PASS
ok play 0.203s
Of course you can implement more sophisticated filtering logic, e.g. you can have minidx and maxidx flags to run cases in a range:
var (
    minidx = flag.Int("minidx", 0, "min case idx to run")
    maxidx = flag.Int("maxidx", -1, "max case idx to run")
)
And the filtering:
if i < *minidx || *maxidx != -1 && i > *maxidx {
    println("Skipping idx", i)
    continue
}
Using it:
go test -maxidx=1
Output:
Skipping idx 2
PASS
ok play 0.188s
Starting with Go 1.7
Go 1.7 (to be released on August 18, 2016) adds the definition of subtests and sub-benchmarks:
The testing package now supports the definition of tests with subtests and benchmarks with sub-benchmarks. This support makes it easy to write table-driven benchmarks and to create hierarchical tests. It also provides a way to share common setup and tear-down code. See the package documentation for details.
With that, you can do things like:
func TestFoo(t *testing.T) {
    // <setup code>
    t.Run("A=1", func(t *testing.T) { ... })
    t.Run("A=2", func(t *testing.T) { ... })
    t.Run("B=1", func(t *testing.T) { ... })
    // <tear-down code>
}
Where the subtests are named "A=1", "A=2", "B=1".
The argument to the -run and -bench command-line flags is a slash-separated list of regular expressions that match each name element in turn. For example:
go test -run Foo # Run top-level tests matching "Foo".
go test -run Foo/A= # Run subtests of Foo matching "A=".
go test -run /A=1 # Run all subtests of a top-level test matching "A=1".
How does this help your case? The names of subtests are string values, which can be generated on-the-fly, e.g.:
for i, c := range cases {
    name := fmt.Sprintf("C=%d", i)
    t.Run(name, func(t *testing.T) {
        if res := myfn(c.arg); res != c.expected {
            t.Errorf("myfn(%q) should return %q, but it returns %q",
                c.arg, c.expected, res)
        }
    })
}
To run the case at index 2, you could start it like
go test -run /C=2
or
go test -run TestName/C=2
I wrote some simple code that works fine with both, although with slightly different command line options. The version for 1.7 is:
// +build go1.7

package plist

import "testing"

func runTest(name string, fn func(t *testing.T), t *testing.T) {
    t.Run(name, fn)
}
and 1.6 and older:
// +build !go1.7

package plist

import (
    "flag"
    "fmt"
    "runtime"
    "strings"
    "testing"
)

func init() {
    flag.Parse()
}

var pattern = flag.String("pattern", "", "specify which test(s) should be executed")
var verbose = flag.Bool("verbose", false, "write whether test was done")

// This is a hack that somewhat simulates the t.Run available from go1.7.
func runTest(name string, fn func(t *testing.T), t *testing.T) {
    // obtain the name of the caller
    var pc [10]uintptr
    runtime.Callers(2, pc[:])
    var fnName = ""
    f := runtime.FuncForPC(pc[0])
    if f != nil {
        fnName = f.Name()
    }
    names := strings.Split(fnName, ".")
    fnName = names[len(names)-1] + "/" + name
    if strings.Contains(fnName, *pattern) {
        if *verbose {
            fmt.Printf("%s is executed\n", fnName)
        }
        fn(t)
    } else {
        if *verbose {
            fmt.Printf("%s is skipped\n", fnName)
        }
    }
}
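With either file in place, a table-driven test can call the shim, and the same test code compiles on both toolchains. A sketch, reusing the cases table and myfn from the question (the go1.7 file would also need the fmt import):

func TestMyFn(t *testing.T) {
    cases := []struct {
        arg, expected string
    }{
        {"%a", "[%a]"},
        {"%-a", "[%-a]"},
        {"%+a", "[%+a]"},
    }
    for i, c := range cases {
        c := c // capture the range variable for the closure
        runTest(fmt.Sprintf("C=%d", i), func(t *testing.T) {
            if res := myfn(c.arg); res != c.expected {
                t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
            }
        }, t)
    }
}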