func Test_something(t *testing.T) {
    // TEST CASE 1: pass an array
    // some logic here

    // TEST CASE 2: pass an EMPTY array --> this will cause the test to fail
    // some logic here

    // TEST CASE 3: pass something else
    // some logic here
}
I am writing some unit tests, but I am not sure whether it is possible to run a test like Test_something, which contains several test cases, without one failing case stopping the execution of the remaining ones. Or does that even make sense?
In the console I would like to see something like this:
TESTCASE1: SUCCESS <message>
TESTCASE2: FAIL <message>
TESTCASE3: SUCCESS <message>
At the moment I get something like this:
TESTCASE1: SUCCESS <message>
TESTCASE2: FAIL <message>
It will naturally stop executing after TESTCASE2 fails.
You can use subtests with the help of the testing.T.Run function. It allows you to group several test cases together and get a separate status for each of them.
func TestSomething(t *testing.T) {
    t.Run("first test case", func(t *testing.T) {
        // implement your first test case here
    })
    t.Run("second test case", func(t *testing.T) {
        // implement your second test case here
    })
}
With the t *testing.T passed to each subtest you may call:
t.Errorf(...): marks the subtest as failed but keeps executing, so the remaining subtests still run.
t.Fatalf(...): marks the subtest as failed and stops that subtest immediately; sibling subtests still run.
See the official doc.
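As a concrete sketch of the pattern, here is a filled-in version (the Sum helper and the test cases are made up for illustration; the real test bodies would come from your own code):

```go
package main

import "testing"

// Sum adds up the elements of a slice; it returns zero for an empty slice.
func Sum(nums []int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

// TestSum runs independent subtests; if one fails with t.Errorf,
// the others still execute and report their own status.
func TestSum(t *testing.T) {
	t.Run("non-empty slice", func(t *testing.T) {
		if got := Sum([]int{1, 2, 3}); got != 6 {
			t.Errorf("Sum = %d, want 6", got)
		}
	})
	t.Run("empty slice", func(t *testing.T) {
		if got := Sum(nil); got != 0 {
			t.Errorf("Sum = %d, want 0", got)
		}
	})
}
```

Running go test -v then reports a separate PASS/FAIL line per subtest instead of stopping at the first failure.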
Within Rust's build tool cargo, I can define a function as a test case simply this way:
#[test]
fn test_something() {
    assert_eq!(true, true);
}
Running with cargo test outputs:
Running unittests src/lib.rs (target/debug/deps/example-14178b7a1a1c9c8c)
running 1 test
test test::test_something ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
So far, so good. Now I have the special case that the test cases in my project are defined in files. They can be executed using a special testcase() function that performs all the steps to run the test and makes specific assert_eq!() calls.
Now I want to achieve the following:
Read all files from a tests/ folder
Run testcase() on every file
Have every file as a single test case in cargo test
The latter is the problem. This is the function (as a test) that runs all tests from the folder using the testcase() function:
#[test]
// Run all tests in the tests/ folder
fn tests() {
    let cases = std::fs::read_dir("tests").unwrap();
    for case in cases {
        testcase(case.unwrap().path().to_str().unwrap());
    }
}
A run of cargo test should print the following, given the test cases a.test, b.test and c.test:
Running unittests src/lib.rs (target/debug/deps/example-14178b7a1a1c9c8c)
running 3 tests
test test::tests::tests/a.test ... ok
test test::tests::tests/b.test ... ok
test test::tests::tests/c.test ... ok
test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Is this somehow possible?
With Rust's built-in test harness, there is no way to declare a single test case other than with the #[test] attribute. In order to get the result you want, you would need to write a proc-macro which reads your directory of tests and generates a #[test] function for each one.
Instead, it will probably be more practical to write a test which uses your own harness code. In your Cargo.toml, define a test target with the default harness disabled:
[[test]]
name = "lang_tests"
harness = false
Then in tests/lang_tests.rs (the tests/ directory goes next to src/ and is already known to Cargo), write a program with an ordinary main() that runs tests:
fn main() {
    let cases = std::fs::read_dir("tests").unwrap();
    let mut passed = 0;
    let mut failed = 0;
    for case in cases {
        if testcase(case.unwrap().path().to_str().unwrap()) {
            eprintln!("...");
            passed += 1;
        } else {
            eprintln!("...");
            failed += 1;
        }
    }
    eprintln!("{passed} passed, {failed} failed");
    if failed > 0 {
        std::process::exit(1);
    }
}
Of course there are a bunch of further details to implement here, but I hope this illustrates how to write a custom test harness that does what you want. Again, you cannot do it with the default test harness except by using macro-generated code, which is probably not worthwhile in this case.
I have a unit test, written with JUnit 5 (Jupiter), that is failing. I do not currently have time to fix the problem, so I would like to mark the test as an expected failure. Is there a way to do that?
I see @Disabled, which causes the test not to be run. I would like the test to still run (and ideally fail the build if it starts to pass), so that I remember that the test is there.
Is there such an annotation in JUnit 5? I could use assertThrows to catch the error, but I would like the build output to indicate that this is not a totally normal test.
You can disable the failing test with the @Disabled annotation. You can then add another test that asserts the first one does indeed fail:
@Test
@Disabled
void fixMe() {
    Assertions.fail();
}

@Test
void fixMeShouldFail() {
    assertThrows(AssertionError.class, this::fixMe);
}
Is there a legitimate way to write down a test case for which I intend to write the full test function later on? Like the pending tests of mochajs?
The package docs describe such an example with testing.(*T).Skip:
Tests and benchmarks may be skipped if not applicable with a call to the Skip method of *T and *B:
func TestTimeConsuming(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping test in short mode.")
    }
    ...
}
The message you provide to Skip will be printed if you launch go test with the -v flag (in this example you will also need to pass the -short flag to see the skip message).
Is there a way to execute test cases in Go in a pre-defined order?
P.S.: I am writing test cases for the life cycle of an event, so I have different APIs for all the CRUD operations. I want to run these test cases in a particular order, as an event can only be destroyed once it has been created.
Also, can I get some value from one test case and pass it as input to another? (Example: to test the delete-event API, I need an event_id, which I get when I call the create_event test case.)
I am new to Go, can someone please guide me through?
Thanks in advance
The only way to do it is to encapsulate all your tests in one test function that calls sub-functions in the right order and with the right context, passing the testing.T pointer to each so they can fail. The downside is that they will all appear as one test. But that is in fact the case: tests are stateless as far as the testing framework is concerned, and each function is a separate test case.
Note that although the tests may run in the order they are written in, I found no documentation stating that this is actually a contractual guarantee. So even though you could write them in order and keep state in external global variables, that is not recommended.
The only flexibility the framework gives you since Go 1.4 is the TestMain function, which lets you run before/after steps, i.e. setup/teardown:
func TestMain(m *testing.M) {
    if err := setUp(); err != nil {
        panic(err)
    }
    rc := m.Run()
    tearDown()
    os.Exit(rc)
}
But that won't give you what you want. The only way to do that safely is to do something like:
// this is the whole stateful sequence of tests - to the testing framework it's just one case
func TestWrapper(t *testing.T) {
    // let's say you pass context as some containing struct
    ctx := new(context)
    test1(t, ctx)
    test2(t, ctx)
    ...
}

// this holds context between methods
type context struct {
    eventId string
}

func test1(t *testing.T, c *context) {
    // do your thing, and you can manipulate the context
    c.eventId = "something"
}

func test2(t *testing.T, c *context) {
    // do your thing, and you can manipulate the context
    doSomethingWith(c.eventId)
}
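Since Go 1.7 you can also get a separate status per step while preserving order, because t.Run calls execute sequentially in the order they are written (unless a subtest calls t.Parallel). A sketch, with made-up createEvent/deleteEvent helpers standing in for the real CRUD API calls:

```go
package main

import "testing"

// Hypothetical helpers standing in for the real create/delete API calls.
func createEvent() string { return "event-42" }

func deleteEvent(id string) bool { return id != "" }

// TestEventLifecycle runs the steps as subtests; they execute in source
// order, share state through the captured eventId variable, and each
// gets its own pass/fail status in the go test output.
func TestEventLifecycle(t *testing.T) {
	var eventId string

	t.Run("create", func(t *testing.T) {
		eventId = createEvent()
		if eventId == "" {
			t.Fatal("create returned an empty id")
		}
	})
	t.Run("delete", func(t *testing.T) {
		if !deleteEvent(eventId) {
			t.Errorf("delete failed for %q", eventId)
		}
	})
}
```

The same ordering caveat applies to top-level test functions, but within one test function the t.Run sequence is explicit and under your control.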
I have unit tests for most of our code, but I cannot figure out how to generate unit-test coverage for certain code in main() in the main package.
The main function is pretty simple: it is basically a select block. It reads flags and then either calls another function, executes something, or simply prints help on screen. However, if the command-line options are not set correctly, it exits with various error codes; hence the need for sub-process testing.
I tried the sub-process testing technique, but modified the code so that it includes a flag for coverage:
cmd := exec.Command(os.Args[0], "-test.run=TestMain -test.coverprofile=/vagrant/ucover/coverage2.out")
Here is original code: https://talks.golang.org/2014/testing.slide#23
Explanation of above slide: http://youtu.be/ndmB0bj7eyw?t=47m16s
But it doesn't generate a cover profile, and I haven't been able to figure out why not. It does generate a cover profile for the main process executing the tests, but any code executed in the sub-process is, of course, not marked as executed.
I am trying to achieve as much code coverage as possible. I am not sure whether I am missing something, whether there is an easier way to do this, or whether it is just not possible.
Any help is appreciated.
Thanks
Amer
I went with another approach, which didn't involve refactoring main(); see this commit.
I use a global (unexported) variable:
var args []string
And then in main(), I use os.Args unless the private var args was set:
a := os.Args[1:]
if args != nil {
    a = args
}
flag.CommandLine.Parse(a)
In my test, I can set the parameters I want:
args = []string{"-v", "-audit", "_tests/p1/conf/gitolite.conf"}
main()
And I still achieve 100% code coverage, even over main().
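Putting the pieces together, a minimal self-contained sketch might look like this (appMain stands in for the real main(), and the -v flag and "app" name are made up):

```go
package main

import (
	"flag"
	"os"
)

// args lets tests inject command-line arguments; nil means "use os.Args".
var args []string

// verbose records the parsed flag so a test can check the result.
var verbose bool

// appMain stands in for main(): it parses either the real os.Args
// or the test-injected args slice.
func appMain() {
	a := os.Args[1:]
	if args != nil {
		a = args
	}
	fs := flag.NewFlagSet("app", flag.ContinueOnError)
	fs.BoolVar(&verbose, "v", false, "verbose output")
	_ = fs.Parse(a)
}
```

A test then sets args = []string{"-v"} and calls appMain() directly, so every statement inside it counts toward coverage without any sub-process.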
I would factor the logic that needs to be tested out of main():
func main() {
    start(os.Args)
}

func start(args []string) {
    // old main() logic
}
This way you can unit-test start() without mutating os.Args.
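A unit test can then drive start() with synthetic argument lists, so its code paths count toward coverage without any sub-process. The body below is a made-up stand-in for the real logic (it records its decision in lastAction purely so the test has something to assert on):

```go
package main

import "testing"

// lastAction records what start decided to do, so the test can check it.
var lastAction string

// start is a stand-in for the refactored main() logic: it inspects the
// argument list it is given instead of reading os.Args directly.
func start(args []string) {
	if len(args) > 1 && args[1] == "-h" {
		lastAction = "help"
		return
	}
	lastAction = "run"
}

// TestStart exercises both branches of start without touching os.Args.
func TestStart(t *testing.T) {
	start([]string{"cmd", "-h"})
	if lastAction != "help" {
		t.Errorf("got %q, want help", lastAction)
	}
	start([]string{"cmd", "-v", "file.conf"})
	if lastAction != "run" {
		t.Errorf("got %q, want run", lastAction)
	}
}
```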
Using @VonC's solution with Go 1.11, I found I had to reset flag.CommandLine in each test that redefines the flags, to avoid a "flag redefined" panic:
for _, check := range checks {
    t.Run("flagging " + check.arg, func(t *testing.T) {
        flag.CommandLine = flag.NewFlagSet(cmd, flag.ContinueOnError)
        args = []string{check.arg}
        main()
    })
}