Why do no third party assertion libraries exist for Common Test?

When writing tests I find myself writing all kinds of little helper functions to make assertions. I searched for an assertion library and didn't find anything. In my tests I often have things like this:
value_in_list(_Value, []) ->
    false;
value_in_list(Value, [Item|List]) ->
    case Value == Item of
        true ->
            true;
        false ->
            value_in_list(Value, List)
    end.
test_start_link(_Config) ->
    % should return the pid and register the process as my_app
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    true = value_in_list(my_app, registered()).
I end up having to write a whole function to check if my_app is a registered process. It would be much nicer if I could just call something like assertion:value_in_list(my_app, registered()) or assertion:is_registered(my_app).
I come from a Ruby background so I hate having to clutter up my tests with utility functions just to make a few assertions. It would be much cleaner if I could just do:
test_start_link(_Config) ->
    % should return the pid and register the process as my_app
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    assertion:value_in_list(my_app, registered()).
So my questions are:
Why doesn't an assertion library exist for Common Test?
Would it be possible to build a third party library that would be accessible during all tests?

Some ideas for this:
Move your application startup to the suite's startup section:
init_per_suite(Config) ->
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    [{app, Pid} | Config].
Then write your test registration as:
test_registration(Config) ->
    Pid = ?config(app, Config),
    true = is_pid(Pid),
    %% registered() returns names (atoms), so check for the registered name
    true = lists:member(my_app, registered()).
There is no need to assert things via explicit assertion functions, since assertions are "built in": just write a failing match like the ones above and the test process will crash, which in turn reports the test case as failed. Each test case runs in its own process, which is also why you want to start the application in the init_per_suite/1 callback: otherwise my_app would be terminated as soon as your test case finishes, because start_link links it to the process-per-test-case.
So the answer is: assertions are built in. Hence there is less need for an assertion library as such.

On a side note, it's terser and more efficient to write that first helper with pattern matching in the clause heads rather than with a case expression:
value_in_list(_Value, []) -> false;
value_in_list(Value, [Value|_List]) -> true;
value_in_list(Value, [_Item|List]) -> value_in_list(Value, List).
I realize this should probably just be a comment to the original question, but that's murderously difficult to read without monospace and newlines.

You can just use EUnit assertions in Common Test.
-include_lib("eunit/include/eunit.hrl").
And all the regular assertions are available.
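For example, the suite from the question could then read (a sketch; ?assert comes from eunit.hrl, and in a real suite you would start the app in init_per_suite/1 as noted above):

-module(my_app_SUITE).
-include_lib("common_test/include/ct.hrl").
-include_lib("eunit/include/eunit.hrl").

-export([all/0, test_start_link/1]).

all() -> [test_start_link].

test_start_link(_Config) ->
    {ok, Pid} = my_app:start_link(),
    ?assert(is_pid(Pid)),
    %% no hand-rolled helper needed: assert membership directly
    ?assert(lists:member(my_app, registered())).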

I decided to write an Erlang assertion library to help with cases like this. It provides this functionality.

Related

Elixir mock only one function from a file

I have a test case where I need to mock downloading an image. The issue is that when I mock the download function, the other functions in that module become undefined, but I also need to call those functions in the test as they originally exist, without mocking.
Is there a way to mock only one function from App.Functions in the example below and keep the rest of the functions working the same?
The code looks like this for setting up the mock:
setup_with_mocks(
  [
    {App.Functions, [], [download_file: fn _url -> :ok end]}
  ],
  context
)
It seems that you are using Mock (https://hexdocs.pm/mock/Mock.html). In that case, you can use the passthrough option:
test_with_mock "test_name", App.Functions, [:passthrough], [download_file: fn _url -> :ok end] do
end
I don't know if the option is available also for setup_with_mocks.
More info here: https://github.com/jjh42/mock#passthrough---partial-mocking-of-a-module
Sometimes difficulty in mocking functions for testing indicates an organizational problem in your code, e.g. a violation of the single-responsibility principle. Pondering things like this starts to venture into more philosophical territory (which Stack Overflow is not geared towards), but generally it's helpful to isolate your modules in a way that is compatible with testing; many of the common code/repo organizational patterns fall into place more easily when you give due consideration to facilitating testing.
As already noted, Mock allows the passthrough option.
The Mox package does not have a viable solution to this particular use case -- even its skipping-optional-callbacks option does not really fit the bill.
Another option is to go the more manual route: pass an opt (or read one from the Application config) that can be overridden at runtime to facilitate testing. This tactic smells to me a bit like JavaScript's heavy reliance on passing callback functions, but it can work in a pinch, e.g. something like:
def download(url, opts \\ []) do
  http_client = Keyword.get(opts, :client, HTTPoison)
  http_client.get(url)
end

# OR

def download(url) do
  http_client = Application.get_env(:myapp, :http_client, HTTPoison)
  http_client.get(url)
end
Then in your tests:
test "download a file" do
assert {:ok, _} = MyApp.download("http://example", client: HttpClientMock)
end
# OR...
setup do
starting_value = Application.get_env(:myapp, :http_client)
on_exit(fn ->
Application.put_env(:myapp, :http_client, starting_value)
end)
end
test "download a file" do
Application.put_env(:myapp, :http_client, ClientMock)
# ...
end
This has the disadvantage of punting compile-time errors into runtime (which might be a worthwhile tradeoff to achieve test coverage), and this approach can become disorganized, so use with care.
Generally, I've found Mox's approach of relying on behaviours/callbacks leads to cleaner tests and cleaner code, but your mileage and use-cases may vary.
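That approach looks roughly like this (a sketch; the module names are hypothetical):

# The behaviour that the production client and the mock both implement:
defmodule MyApp.HttpClient do
  @callback get(url :: String.t()) :: {:ok, term()} | {:error, term()}
end

# In test_helper.exs, define a mock satisfying the behaviour:
Mox.defmock(HttpClientMock, for: MyApp.HttpClient)

# In a test, set an expectation on the mock:
Mox.expect(HttpClientMock, :get, fn _url -> {:ok, :fake_response} end)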

Passing custom command-line arguments to a Rust test

I have a Rust test which delegates to a C++ test suite using doctest and wants to pass command-line parameters to it. My first attempt was
// in mod ffi
pub fn run_tests(cli_args: &mut [String]) -> bool;

#[test]
fn run_cpp_test_suite() {
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        panic!("C++ test suite reported errors");
    }
}
Because cargo test help shows
USAGE:
    cargo.exe test [OPTIONS] [TESTNAME] [-- <args>...]
I expected
cargo test -- --test-case="X"
to let run_cpp_test_suite access and pass on the --test-case="X" parameter. But it doesn't; I get error: Unrecognized option: 'test-case' and cargo test -- --help shows it has a fixed set of options
Usage: --help [OPTIONS] [FILTER]

Options:
        --include-ignored
                        Run ignored and not ignored tests
        --ignored       Run only ignored tests
    ...
My other idea was to pass the arguments in an environment variable, that is
DOCTEST_ARGS="--test-case='X'" cargo test
but then I need to somehow split that string into arguments (handling at least spaces and quotes correctly) either in Rust or in C++.
There are two pieces of Rust toolchain involved when you run cargo test.
cargo test itself looks for all testable targets in your package or workspace, builds them with cfg(test), and runs those binaries. cargo test processes the arguments to the left of the --, and the arguments to the right are passed to the binary.
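For example (the test-name filter my_filter is illustrative; both flags after the -- are standard libtest options):

# left of the --: arguments for cargo itself; right of it: passed to the test binary
cargo test my_filter -- --nocapture --test-threads=1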
Then,
Tests are built with the --test option to rustc which creates an executable with a main function that automatically runs all functions annotated with the #[test] attribute in multiple threads. #[bench] annotated functions will also be run with one iteration to verify that they are functional.
The libtest harness may be disabled by setting harness = false in the target manifest settings, in which case your code will need to provide its own main function to handle running tests.
The “libtest harness” is what rejects your extra arguments. In your case, since you're intending to run an entire other test suite, I believe it would be appropriate to disable the harness.
Move your delegation code to its own file, conventionally located in tests/ in your package directory:
Cargo.toml
src/
    lib.rs
    ...
tests/
    cpp_test.rs
Write an explicit target section in your Cargo.toml for it, with harness disabled:
[[test]]
name = "cpp_test"
# path = "tests/cpp_test.rs" # This is automatic; you can use a different path if you really want to.
harness = false
In cpp_test.rs, instead of writing a function with the #[test] attribute, write a normal main function which reads env::args() and calls the C++ tests.
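A minimal sketch of that main (the crate name my_crate is hypothetical; adjust it to whatever exposes the ffi wrapper from the question):

// tests/cpp_test.rs -- with harness = false, we provide main ourselves
use std::env;
use std::process::exit;

use my_crate::ffi; // hypothetical; the crate must export the ffi module

fn main() {
    // Without the libtest harness in the way, everything after `--` on the
    // `cargo test` command line arrives here via env::args().
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        exit(1); // a nonzero exit code marks the test target as failed
    }
}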
[Disclaimer: I'm familiar with these mechanisms because I've used Criterion benchmarking (which similarly requires disabling the default harness) but I haven't actually written a test with custom arguments the way you're looking for. So, some details might be wrong. Please let me know if anything needs correcting.]
In addition to Kevin Reid's answer, if you don't want to write your own test harness, you can use the shell-words crate to split an environment variable into individual arguments following shell rules:
use std::env::var;
use std::process::Command;

let args = var("DOCTEST_ARGS").unwrap_or_else(|_| String::new());
let args = shell_words::split(&args).expect("failed to parse DOCTEST_ARGS");

Command::new("cpptest")
    .args(args)
    .spawn()
    .expect("failed to start subprocess")
    .wait()
    .expect("failed to wait for subprocess");

How to tear down testbed resources in Tasty conditionally?

I use Haskell's Tasty framework for testing. I acquire and release resources with Tasty's withResource function:
withResource :: IO a -> (a -> IO ()) -> (IO a -> TestTree) -> TestTree
where a is the type of the resource. But I want to keep the resources if tests fail and release them only if the tests pass. How is that possible?
Test failures (at least in tasty-hunit) are implemented as exceptions. The purpose of withResource and bracket is to free resources even when there is an exception. If you write straight-line code like this, the resource will be freed if and only if the assertions pass:
testCase "resource management" $ do
a <- allocate
assertBool =<< runTest
cleanUp a
It's a bit hacky, but you could use an AllSucceed dependency and define a dummy test that cleans up your resource but runs only when certain other tests succeed.
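A sketch of that idea (useResource and releaseResource are hypothetical stand-ins; after and AllSucceed live in Test.Tasty since tasty 1.2):

import Test.Tasty
import Test.Tasty.HUnit

useResource, releaseResource :: IO ()
useResource = pure ()     -- stand-in for a test exercising the resource
releaseResource = pure () -- stand-in for tearing the testbed down

main :: IO ()
main = defaultMain $ testGroup "suite"
  [ testCase "uses the resource" useResource
  , after AllSucceed "uses the resource" $
      -- runs only if every test matching the pattern above succeeded
      testCase "cleanup" releaseResource
  ]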
A caveat is that such a cleanup test could be filtered out by a pattern.
Alternatively, I think I'd accept a pull request that adds a version of withResource with an additional Outcome argument.

Unit testing side effects in Elixir

I'm writing a unit test for a function that calls out to module as part of a side effect of invoking it:
defmodule HeimdallrWeb.VerifyController do
use HeimdallrWeb, :controller
def verify(conn, _params) do
[forwarded_host | _tail] = get_req_header(conn, "x-forwarded-host")
case is_preview_page?(forwarded_host) do
{:ok, false} ->
conn |> send_resp(200, "")
{:ok, %Heimdallr.Commits.Commit{} = commit} ->
Heimdallr.Commits.touch_commit(commit)
conn |> send_resp(200, "")
{:not_found, _reason} ->
conn |> send_resp(200, "")
end
end
end
The side effect is triggered from the line Heimdallr.Commits.touch_commit(commit).
A few questions about this:
Should my unit test be concerned with testing the effects of the touch_commit call?
If so, should I think about passing a generic "touch" function into verify to make it easier to test? This might be difficult due to the nature of Phoenix's/Elixir's routing system; I haven't investigated.
If I were using Rails / Ruby / RSpec, I'd set an expectation that a class-level method would be called on the HeimdallrCommits module.
My concern and reason for writing the test is that in the future I may accidentally remove the functionality that is touching a commit by deleting or commenting out the line etc.
I would say 1: no, and that is to keep the complexity of your test low. You only want to test whatever it is you want to test in a function; the rest should be mocked or ignored. What you could do is verify that your function invoked touch_commit, which should be part of any good mocking framework; the verify method in Mockito or Moq is what I am thinking of. Those are my five cents; sorry to say I am not familiar with Phoenix/Elixir, so I can't give you any working examples.
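For what it's worth, a sketch of that verify-style check in Elixir using the Mock library from the earlier thread (the route, header value, and ConnCase plumbing are hypothetical):

import Mock

test "verify/2 touches the commit for preview hosts", %{conn: conn} do
  with_mock Heimdallr.Commits, [:passthrough], [touch_commit: fn _commit -> :ok end] do
    conn
    |> put_req_header("x-forwarded-host", "preview.example.com")
    |> get("/verify")

    # fails the test if the side effect is ever removed
    assert_called(Heimdallr.Commits.touch_commit(:_))
  end
end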

What unit testing frameworks are available for F#

I am looking specifically for frameworks that allow me to take advantage of unique features of the language. I am aware of FsUnit. Would you recommend something else, and why?
My own unit testing library, Unquote, takes advantage of F# quotations to allow you to write test assertions as plain, statically checked F# boolean expressions and automatically produces nice step-by-step test failure messages. For example, the following failing xUnit test
[<Fact>]
let ``demo Unquote xUnit support`` () =
    test <@ ([3; 2; 1; 0] |> List.map ((+) 1)) = [1 + 3..1 + 0] @>
produces the following failure message
Test 'Module.demo Unquote xUnit support' failed:
([3; 2; 1; 0] |> List.map ((+) 1)) = [1 + 3..1 + 0]
[4; 3; 2; 1] = [4..1]
[4; 3; 2; 1] = []
false
C:\File.fs(28,0): at Module.demo Unquote xUnit support()
FsUnit and Unquote have similar missions: to allow you to write tests in an idiomatic way, and to produce informative failure messages. But FsUnit is really just a small wrapper around NUnit constraints, creating a DSL which hides object construction behind composable function calls, and that convenience comes at a cost: you lose static type checking in your assertions. For example, the following is valid in FsUnit:
[<Test>]
let test1 () =
    1 |> should not (equal "2")
But with Unquote, you get all of F#'s static type-checking features, so the equivalent assertion would not even compile, preventing us from introducing a bug in our test code:
[<Test>] //yes, Unquote supports both xUnit and NUnit automatically
let test2 () =
    test <@ 1 <> "2" @> //simple assertions may be written more concisely, e.g. 1 <>! "2"
    //           ^^^
    //Error 22 This expression was expected to have type int but here has type string
Also, since quotations are able to capture more information at compile time about an assertion expression, failure messages are a lot richer too. For example the failing FsUnit assertion 1 |> should not (equal 1) produces the message
Test 'Test.Swensen.Unquote.VerifyNunitSupport.test1' failed:
Expected: not 1
But was: 1
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\FsUnit.fs(11,0): at FsUnit.should[a,a](FSharpFunc`2 f, a x, Object y)
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\VerifyNunitSupport.fs(29,0): at Test.Swensen.Unquote.VerifyNunitSupport.test1()
Whereas the failing Unquote assertion 1 <>! 1 produces the following failure message (notice the cleaner stack trace too)
Test 'Test.Swensen.Unquote.VerifyNunitSupport.test1' failed:
1 <> 1
false
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\VerifyNunitSupport.fs(29,0): at Test.Swensen.Unquote.VerifyNunitSupport.test1()
And of course from my first example at the beginning of this answer, you can see just how rich and complex Unquote expressions and failure messages can get.
Another major benefit of using plain F# expressions as test assertions over the FsUnit DSL, is that it fits very well with the F# process of developing unit tests. I think a lot of F# developers start by developing and testing code with the assistance of FSI. Hence, it is very easy to go from ad-hoc FSI tests to formal tests. In fact, in addition to special support for xUnit and NUnit (though any exception-based unit testing framework is supported as well), all Unquote operators work within FSI sessions too.
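For instance, promoting an ad-hoc FSI check into a formal test is nearly copy-paste (a sketch; the #r reference path is illustrative):

> #r "Unquote.dll";; // or #r "nuget: Unquote" on newer FSI versions
> open Swensen.Unquote;;
> test <@ List.rev [3; 2; 1] = [1; 2; 3] @>;;
val it : unit = ()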
I haven't yet tried Unquote, but I feel I have to mention FsCheck:
http://fscheck.codeplex.com/
This is a port of Haskell's QuickCheck library, where rather than specifying which specific tests to carry out, you specify which properties of your function should hold true.
To me, this is a bit harder than using traditional tests, but once you figure out the properties, you'll have more solid tests. Do read the introduction: http://fscheck.codeplex.com/wikipage?title=QuickStart&referringTitle=Home
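For example, a first property might look like this (the property itself is illustrative):

open FsCheck

// Property: reversing a list twice yields the original list.
let revRevIsOriginal (xs: int list) = List.rev (List.rev xs) = xs

// FsCheck generates ~100 random inputs and reports any counterexample.
Check.Quick revRevIsOriginal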
I'd guess a mix of FsCheck and Unquote would be ideal.
You could try my unit testing library Expecto; it has some features you might like:
F# syntax throughout, tests as values; write plain F# to generate tests
Use the built-in Expect module, or an external lib like Unquote for assertions
Parallel tests by default
Test your Hopac code or your Async code; Expecto is async throughout
Pluggable logging and metrics via Logary Facade; easily write adapters for build systems, or use the timing mechanism for building an InfluxDB+Grafana dashboard of your tests' execution times
Built-in support for BenchmarkDotNet
Built-in support for FsCheck; makes it easy to build tests with generated/random data or to build invariant models of your object's/actor's state space
Hello world looks like this
open Expecto

let tests =
  test "A simple test" {
    let subject = "Hello World"
    Expect.equal subject "Hello World" "The strings should equal"
  }

[<EntryPoint>]
let main args =
  runTestsWithArgs defaultConfig args tests