I have the following init function in a gen_fsm callback module (my_fsm):
init(Args) when length(Args) =:= 2 ->
    % do work with Args here
    {ok, my_state, InitialState}.
There is no other init clause in the callback module.
I want to write a unit test using eunit that asserts that calling gen_fsm with an argument list that does not contain two elements fails:
start_link_without_args_exits_test() ->
?assertException(exit, _, gen_fsm:start_link(my_fsm, [], [])).
However, when the test is run, eunit skips the test with the message:
undefined
*unexpected termination of test process*
::{function_clause,[{my_fsm,init,[[]]},
                    {gen_fsm,init_it,6},
                    {proc_lib,init_p_do_apply,3}]}
Why does the test not "catch" this error and report it as a passed test?
The exception actually happens in the gen_fsm process. When the call to the init/1 function is made, gen_fsm catches the result. If the result is an error, it sends the error back to the parent (through proc_lib:init_ack/1,2) and then calls exit(Error).
Because you use start_link and are not trapping exits, you will never receive the return value; you'll simply crash instead. For your test case, you'll need to either use gen_fsm:start/3 or call process_flag(trap_exit, true) in order to obtain the return value rather than crashing when the other process goes down.
Then you'll need to switch from
?assertException(exit, _, gen_fsm:start_link(my_fsm, [], [])).
to something like
?assertMatch({error, {function_clause, [{my_fsm, init, _} | _]}},
             gen_fsm:start_link(my_fsm, [], []))
for it to work.
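Putting the two changes together, a sketch of the trap_exit variant (untested; assumes eunit's header is included in the test module):

```erlang
init_failure_test() ->
    %% Trap exits so the linked gen_fsm's crash comes back as a
    %% return value instead of killing the test process.
    process_flag(trap_exit, true),
    ?assertMatch({error, {function_clause, [{my_fsm, init, _} | _]}},
                 gen_fsm:start_link(my_fsm, [], [])).
```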
I want to verify that a function has never been called using mockito. I am aware of the verifyNever function, but that does not seem to work in my case, or I'm not using it correctly.
My test currently looks like this:
test('when the string is less than 3 characters api is not called', () async {
  var databaseService = getAndRegisterDatabaseService();
  var model = ViewModel();

  model.searchPatient('sk');

  // Wait for futures to finish.
  await Future.delayed(Duration(seconds: 0));

  verifyNever(databaseService.searchPatient('sk'));
});
but I get an error when running the test:
No matching calls (actually, no calls at all).
(If you called `verify(...).called(0);`, please instead use `verifyNever(...);`.)
To me it seems like verifyNever can't verify that a function was never called at all, only that it was never called with a specific argument. Is that correct? If so, is there another way to test what I want?
The verifyNever examples from package:mockito's README.md cover your case:
// Or never called
verifyNever(cat.eatFood(any));
So, in your case, assuming that databaseService is a Mock object, you should be able to use verifyNever(databaseService.searchPatient(any)); to verify that the .searchPatient method is never called, regardless of the arguments.
After trying to write a minimal reproducible example, it turned out that the problem was that I had read the output incorrectly.
In the following output I read the top row as the title of the failed test, when in fact the bottom row is the name of the failed test that corresponds to the output above it.
✓ ViewmodelTest - searchPatient - when the string is less than 3 characters api is not called
No matching calls (actually, no calls at all).
(If you called `verify(...).called(0);`, please instead use `verifyNever(...);`.)
package:test_api fail
_VerifyCall._checkWith
package:mockito/src/mock.dart:672
_makeVerify.<fn>
package:mockito/src/mock.dart:954
main.<fn>.<fn>.<fn>
test/viewmodel_tests/viewmodel_test.dart:144
✖ ViewmodelTest - searchPatient - api is called with correct and cleaned query
When I run the following program with cargo test:
use std::panic;

fn assert_panic_func(f: fn() -> (), msg: String) {
    let result = panic::catch_unwind(|| {
        f();
    });
    assert!(result.is_err(), "{}", msg);
}

macro_rules! assert_panic {
    ($test:expr, $msg:tt) => {{
        fn wrapper() {
            $test;
        }
        assert_panic_func(wrapper, $msg.to_string())
    }};
}

fn main() {
    println!("running main()");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn t_1() {
        assert_panic!(
            panic!("We are forcing a panic"),
            "This is asserted within function (raw panic)."
        );
        // assert_panic!(true, "This is asserted within function (raw true).");
    }
}
I get the expected output:
running 1 test
test tests::t_1 ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
If I uncomment the second assert_panic!(...) line, and rerun cargo test, I get the following output:
running 1 test
test tests::t_1 ... FAILED
failures:
---- tests::t_1 stdout ----
thread 'tests::t_1' panicked at 'We are forcing a panic', src/lib.rs:29:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tests::t_1' panicked at 'This is asserted within function (raw true).', src/lib.rs:7:5
failures:
tests::t_1
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
The second panic is legitimate, and it is what I am looking for, but the first panic seems to be triggered by the line that did not trigger a panic in the first invocation.
What is going on, and how do I fix it?
The stderr output
thread 'tests::t_1' panicked at 'We are forcing a panic', src/main.rs:30:23
is logged regardless of whether the panic is caught; the test runner just does not show logged output unless a test fails. To suppress that text entirely, you need to separately swap out the panic notification hook using std::panic::set_hook.
fn assert_panic_func(f: fn() -> (), msg: String) {
    let previous_hook = panic::take_hook();
    // Override the default hook to avoid logging panic location info.
    panic::set_hook(Box::new(|_| {}));
    let result = panic::catch_unwind(|| {
        f();
    });
    panic::set_hook(previous_hook);
    assert!(result.is_err(), "{}", msg);
}
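To see the hook swap end to end, here is a self-contained sketch (the panics_silently helper name is illustrative, not from the code above):

```rust
use std::panic;

// Run `f` with the default panic hook swapped out, so a caught panic
// prints nothing; the previous hook is restored afterwards.
fn panics_silently(f: fn()) -> bool {
    let previous_hook = panic::take_hook();
    panic::set_hook(Box::new(|_| {})); // silence "thread ... panicked at ..."
    let result = panic::catch_unwind(f);
    panic::set_hook(previous_hook);
    result.is_err()
}

fn main() {
    assert!(panics_silently(|| panic!("hidden")));
    assert!(!panics_silently(|| {}));
    println!("no panic text was printed");
}
```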
All that said, I second @SCappella's answer about using #[should_panic].
Even if std::panic::catch_unwind catches the panic, any output from that panic will be printed. The reason you don't see anything with the first test (with the commented out second panic) is that cargo test doesn't print output from successful tests.
To see this behavior more clearly, you can use main instead of a test. (playground)
fn main() {
    let _ = std::panic::catch_unwind(|| {
        panic!("I don't feel so good Mr. catch_unwind");
    });
    println!("But the execution of main still continues.");
}
Running this gives the output
thread 'main' panicked at 'I don't feel so good Mr. catch_unwind', src/main.rs:3:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
But the execution of main still continues.
Note that panics usually output to stderr, rather than stdout, so it's possible to filter these out.
See also Suppress panic output in Rust when using panic::catch_unwind.
I'm not sure if this is what you're trying to do, but if you want to ensure that a test panics, use the should_panic attribute. For example,
#[test]
#[should_panic]
fn panics() {
    panic!("Successfully panicked");
}
At the time I was not aware that unit tests suppress output messages. I later became aware of this suppression when researching why println!(...) would not work within unit tests. That it might also explain why panics sometimes display and sometimes do not does make sense.
Nonetheless, it seems perverse to me that panics produce output even when I explicitly tell Rust that I wish to prevent the panic from having any effect, but if that is what Rust does, however perverse it might seem, one has to live with it.
I was aware of the #[should_panic] attribute, but was not happy with this solution for two reasons:
Firstly, it requires that each test becomes a separate function, whereas I tend to put a number of closely related tests (many of the tests being no more than a single assert!(...) statement) into one function.
Secondly, it would be nice to have a single model for expressing each test. To my mind, testing whether an expression raises a panic (or fails to raise one) is no different from testing whether a result is equal, or unequal, to some particular value. It makes far more sense to me to have a single model expressing both kinds of test, hence my desire for an assert_panic!(...) macro that behaves analogously to the assert!(...) or assert_eq!(...) macros. It seems that this is simply not an achievable objective within Rust.
Thank you for clearing that up.
I have these two lines at different points of my code:
Message<T1> reply = (Message<T1>) template.sendAndReceive(channel1, message);
Message<T2> reply = (Message<T2>) template.sendAndReceive(channel2, message);
I am doing some unit testing and the test covers both statements. When I try to mock the behaviour, I define some behaviour like this:
Mockito.when(template.sendAndReceive(Mockito.any(MessageChannel.class), Matchers.<GenericMessage<T1>>any() )).thenReturn(instance1);
Mockito.when(template.sendAndReceive(Mockito.any(MessageChannel.class), Matchers.<GenericMessage<T2>>any() )).thenReturn(null);
When I execute the unit tests and do some debugging, the first statement returns null.
Do you have any idea why the matchers seem not to work? It always takes the last definition of the mock. I am using Mockito 1.1.10.
When I execute the unit tests and do some debugging, the first statement returns null
This happens because you stubbed the same method invocation twice with thenReturn(..), and the last stubbing, the one with null, won.
The proper way to achieve your goal is to provide a list of consecutive return values to be returned when the method is called:
Mockito.when(template.sendAndReceive(Matchers.any(MessageChannel.class), Matchers.any(GenericMessage.class)))
.thenReturn(instance1, null);
In this case, the returned value for the first invocation will be instance1, and all subsequent invocations will return null. See an example here.
Another option, as Ashley Frieze suggested, would be making template.sendAndReceive return different values based on arguments:
Mockito.when(template.sendAndReceive(Matchers.same(channel1), Matchers.any(GenericMessage.class)))
.thenReturn(instance1);
Mockito.when(template.sendAndReceive(Matchers.same(channel2), Matchers.any(GenericMessage.class)))
.thenReturn(null);
Or, even shorter, we can omit the second stubbing, because the default return value for unstubbed mock method invocations is null:
Mockito.when(template.sendAndReceive(Matchers.same(channel1), Matchers.any(GenericMessage.class)))
.thenReturn(instance1);
Here we assume that channel1 and channel2 are in scope of the test class and are injected into the object under test (at least it seems so from the code snippet you provided in the question).
I have an app (epazote) that once started runs forever, but I want to test some values before it blocks/waits until ctrl+c is pressed or it is killed.
Here is a small example: http://play.golang.org/p/t0spQRJB36
package main

import (
	"fmt"
	"os"
	"os/signal"
)

type IAddString interface {
	AddString(string)
}

type addString struct{}

func (self *addString) AddString(s string) {
	fmt.Println(s)
}

func block(a IAddString, s string) {
	// test this
	a.AddString(s)

	// ignore this while testing
	block := make(chan os.Signal)
	signal.Notify(block, os.Interrupt, os.Kill)
	for {
		signalType := <-block
		switch signalType {
		default:
			signal.Stop(block)
			fmt.Printf("%q signal received.", signalType)
			os.Exit(0)
		}
	}
}

func main() {
	a := &addString{}
	block(a, "foo")
}
I would like to know if it is possible to ignore some parts of the code while testing, or how to test such cases. I have implemented an interface, in this case for testing AddString, which helped me test some parts, but I have no idea how to avoid the "block" part in the tests.
Any ideas?
Update: Putting the code inside the loop (AddString) in another function works, but only for testing that function. If I want full code coverage, I still need to check/test the blocking part, for example that it behaves properly when receiving ctrl+c or a kill -HUP. I was thinking of maybe creating a fake signal.Notify, but I don't know how to override imported packages, in case that could work.
Yes, it's possible. Put the code that is inside the loop in a separate function, and unit test that function without the loop.
Introduce test delegates into your code.
Extract your loop into a function that takes two functions as arguments: onBeginEvent and onEndEvent. The function signatures should take:
the state that you want to inspect inside the test case
optionally, a counter of the loop number (so you can identify each iteration). It is optional because the delegate implementation can itself count the number of times it was invoked.
At the beginning of your loop you call onBeginEvent(counter, currentState); then your code does its normal work, and at the end you call onEndEvent(counter, currentState). Presumably your code has changed currentState.
In production you could use an empty implementation of the function delegates, or add a nil check in your loop.
You can use this model to put in as many checks of your processing algorithms as you want. Say you have five checks; you look back and realize it's becoming too hard to manage. You create an interface that defines your callback functions. These callback functions are a powerful way of changing your service's behavior. You step back one more time and realize that the interface is actually your "service's policy" ;)
Once you take that route you will want to stop your infinite loop somehow. If you want tight control within a test case, you could take a third function delegate that returns true when it is time to quit the loop. A shared variable is another option for controlling the quit condition.
This is certainly a higher level of testing than unit testing and it is necessary in complex services.
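The delegate scheme above might be sketched in Go like this (runLoop, onBeginEvent, onEndEvent, and shouldQuit are illustrative names, not from the original code):

```go
package main

import "fmt"

// runLoop delegates observation and the quit decision to callbacks,
// so a test can inspect the evolving state and stop the loop at will.
func runLoop(state *int,
	onBeginEvent func(counter, state int),
	onEndEvent func(counter, state int),
	shouldQuit func(counter int) bool) {
	for counter := 0; ; counter++ {
		onBeginEvent(counter, *state)
		*state++ // the loop's real work would go here
		onEndEvent(counter, *state)
		if shouldQuit(counter) {
			return
		}
	}
}

func main() {
	state := 0
	noop := func(int, int) {}
	// Quit after three iterations; a test would assert on state instead.
	runLoop(&state, noop, noop, func(c int) bool { return c == 2 })
	fmt.Println(state) // 3
}
```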
When writing tests I find myself writing all kinds of small little helper functions to make assertions. I searched for an assertion library and didn't find anything. In my tests I often have things like this:
value_in_list(_Value, []) ->
    false;
value_in_list(Value, [Item|List]) ->
    case Value == Item of
        true ->
            true;
        false ->
            value_in_list(Value, List)
    end.
test_start_link(_Config) ->
    % should return the pid and register the process as my_app
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    value_in_list(my_app, registered()).
I end up having to write a whole function to check if my_app is a registered process. It would be much nicer if I could just call something like assertion:value_in_list(my_app, registered()) or assertion:is_registered(my_app).
I come from a Ruby background so I hate having to clutter up my tests with utility functions just to make a few assertions. It would be much cleaner if I could just do:
test_start_link(_Config) ->
    % should return the pid and register the process as my_app
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    assertion:value_in_list(my_app, registered()).
So my questions are:
Why doesn't an assertion library exist for Common Test?
Would it be possible to build a third party library that would be accessible during all tests?
Some ideas for this:
Move your application startup to the suite's startup section:
init_per_suite(Config) ->
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    [{app, Pid} | Config].
Then write your test registration as:
test_registration(Config) ->
    Pid = ?config(app, Config),
    true = lists:member(Pid, registered()).
There is no need to assert things via explicit assertion functions, since assertions are "built in": just write a failing match like the one above and the test process will crash, which in turn reports that the test case went wrong. Each test case runs in its own process, which is also why you want to start the application in the init_per_suite/1 callback; otherwise my_app would be terminated as soon as your test case has run, since it is linked to the process-per-test-case.
So the answer is: assertions are built in. Hence there is less need for an assertion library as such.
On a side note, it's terser and more efficient to write that first helper with pattern matching in the function heads, rather than adding a case.
value_in_list(_Value, [] ) -> false;
value_in_list( Value, [Value|List] ) -> true;
value_in_list( Value, [ _ |List] ) -> value_in_list(Value, List).
I realize this should probably just be a comment to the original question, but that's murderously difficult to read without monospace and newlines.
You can just use EUnit assertions in Common Test.
-include_lib("eunit/include/eunit.hrl").
And all the regular assertions are available.
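For example, the suite's test case might then read (a sketch; assumes my_app registers itself under its own name, as in the question):

```erlang
-include_lib("eunit/include/eunit.hrl").

test_start_link(_Config) ->
    {ok, Pid} = my_app:start_link(),
    ?assert(is_pid(Pid)),
    ?assert(lists:member(my_app, registered())).
```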
I decided to write an Erlang assertion library to help with cases like this. It provides this functionality.