Haskell unit testing

I'm new to Haskell and working on unit testing, but I find the ecosystem very confusing. In particular, I'm confused about the relationship between HTF and HUnit.
In some examples you set up test cases, export them in a tests list, and then run them in GHCi with runTestTT (like this HUnit example).
In other examples, you create a test runner tied into the cabal file that uses some preprocessor magic to find your tests, like in this git example. It also seems that HTF tests need to be prefixed with test_ or they aren't run? I had a hard time finding any documentation on that; I just noticed the pattern that everyone followed.
Anyway, can someone help sort this out for me? What is considered the standard way of doing things in Haskell? What are the best practices? What is the easiest to set up and maintain?

Generally, any significant Haskell project is run with Cabal. This takes care of building, distribution, documentation (with the help of Haddock), and testing.
The standard approach is to put your tests in the test directory and then set up a test suite in your .cabal file. This is detailed in the user manual. Here's what the test suite for one of my projects looks like:
Test-Suite test-melody
  type:             exitcode-stdio-1.0
  main-is:          Main.hs
  hs-source-dirs:   test
  build-depends:    base >=4.6 && <4.7,
                    test-framework,
                    test-framework-hunit,
                    HUnit,
                    containers == 0.5.*
Then in the file test/Main.hs
import Test.HUnit
import Test.Framework
import Test.Framework.Providers.HUnit
import Data.Monoid
import Control.Monad
import Utils

pushTest :: Assertion
pushTest = [NumLit 1] ^? push (NumLit 1)

pushPopTest :: Assertion
pushPopTest = [] ^? (push (NumLit 0) >> void pop)

main :: IO ()
main = defaultMainWithOpts
       [ testCase "push" pushTest
       , testCase "push-pop" pushPopTest
       ]
       mempty
Where Utils defines some nicer interfaces over HUnit.
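The answer doesn't show Utils, but a (^?) combinator along these lines would make the example compile. This is only a sketch of the kind of helper meant here, assuming the stack from the project under test is modelled as a State monad over a list (the real NumLit, push, and pop may well differ):

module Utils where

import Control.Monad.State
import Test.HUnit

-- | Assert that running a stack action from an empty stack
--   leaves exactly the expected stack behind.
(^?) :: (Eq a, Show a) => [a] -> State [a] b -> Assertion
expected ^? action = assertEqual "final stack" expected (execState action [])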
For lighter-weight testing, I strongly recommend you use QuickCheck. It lets you write short properties and test them over a series of random inputs. For example:
-- Tests.hs
import Test.QuickCheck
prop_reverseReverse :: [Int] -> Bool
prop_reverseReverse xs = reverse (reverse xs) == xs
And then
$ ghci Tests.hs
> import Test.QuickCheck
> quickCheck prop_reverseReverse
.... Passed Tests (100/100)
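If you later want properties like this in the same cabal test suite as the HUnit cases above, test-framework also has a QuickCheck provider. A minimal sketch, assuming the test-framework-quickcheck2 package is added to build-depends:

import Test.Framework
import Test.Framework.Providers.QuickCheck2 (testProperty)

prop_reverseReverse :: [Int] -> Bool
prop_reverseReverse xs = reverse (reverse xs) == xs

main :: IO ()
main = defaultMain [testProperty "reverse . reverse == id" prop_reverseReverse]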

I'm also a newbie Haskeller and I found this introduction really helpful: "Getting started with HUnit". To summarize, here is a simple example of HUnit usage without a .cabal project file.
Let's assume that we have a module SafePrelude.hs:
module SafePrelude where
safeHead :: [a] -> Maybe a
safeHead [] = Nothing
safeHead (x:_) = Just x
We can put the tests into TestSafePrelude.hs as follows:
module TestSafePrelude where

import Test.HUnit
import SafePrelude

testSafeHeadForEmptyList :: Test
testSafeHeadForEmptyList =
  TestCase $ assertEqual "Should return Nothing for empty list"
                         Nothing (safeHead ([] :: [Int]))

testSafeHeadForNonEmptyList :: Test
testSafeHeadForNonEmptyList =
  TestCase $ assertEqual "Should return (Just head) for non empty list"
                         (Just 1) (safeHead ([1] :: [Int]))

main :: IO Counts
main = runTestTT $ TestList [testSafeHeadForEmptyList, testSafeHeadForNonEmptyList]
Now it's easy to run tests using ghc:
runghc TestSafePrelude.hs
or Hugs - in this case TestSafePrelude.hs has to be renamed to Main.hs (as far as I know Hugs requires that; don't forget to change the module header too):
runhugs Main.hs
or any other Haskell compiler ;-)
Of course there is more to HUnit than that, so I really recommend reading the suggested tutorial and the library's User's Guide.
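For example, HUnit also provides operator shorthands; the same two tests could be written with the (~:) label operator and the (~=?) expected-vs-actual operator (a sketch of that style, reusing safeHead from above):

main :: IO Counts
main = runTestTT $ TestList
  [ "safeHead []"  ~: Nothing ~=? safeHead ([] :: [Int])
  , "safeHead [1]" ~: Just 1  ~=? safeHead ([1] :: [Int])
  ]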

You've had answers to most of your questions, but you also asked about HTF, and how that works.
HTF is a framework designed for both unit testing -- it is backwards compatible with HUnit (it integrates and wraps it to provide extra functions) -- and property-based testing -- it integrates with QuickCheck. It uses a preprocessor to locate tests so that you don't have to build a list manually. The preprocessor is added to your test source files using a pragma:
{-# OPTIONS_GHC -F -pgmF htfpp #-}
(Alternatively, I guess you could add the same options to the ghc-options property in your cabal file, but I've never tried this so I don't know whether it is useful or not.)
The preprocessor scans your module for top-level functions named test_xxxx or prop_xxxx and adds them to a list of tests for the module. You can either use this list directly by putting a main function in the module and running the tests there (main = htfMain htf_thisModulesTests), or export them from the module and have a main test program for multiple modules, which imports the modules with tests and runs all of them:
import {-# HTF_TESTS #-} ModuleA
import {-# HTF_TESTS #-} ModuleB
main :: IO ()
main = htfMain htf_importedTests
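For reference, a module imported this way might look roughly like the following. This is only a sketch (the test bodies are placeholders of my own), but it shows the naming convention the preprocessor relies on:

{-# OPTIONS_GHC -F -pgmF htfpp #-}
module ModuleA where

import Test.Framework

-- Picked up by htfpp because of the test_ prefix (HUnit-style assertion).
test_reverseSingleton :: IO ()
test_reverseSingleton = assertEqual [1 :: Int] (reverse [1])

-- Picked up because of the prop_ prefix (QuickCheck property).
prop_reverseReverse :: [Int] -> Bool
prop_reverseReverse xs = reverse (reverse xs) == xs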
This program can be integrated with cabal using the technique described by @jozefg, or loaded into ghci and run interactively (although not on Windows - see https://github.com/skogsbaer/HTF/issues/60 for details).
Tasty is another alternative that provides a way of integrating different kinds of tests. It doesn't have a preprocessor like HTF, but it has a module that performs a similar function using Template Haskell. Like HTF, it relies on a naming convention to identify your tests (in this case case_xxxx rather than test_xxxx). In addition to HUnit and QuickCheck tests, it also has modules for handling a number of other test types.
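Without the Template Haskell discovery, a plain Tasty main that mixes HUnit and QuickCheck tests looks roughly like this (a sketch assuming the tasty, tasty-hunit and tasty-quickcheck packages; the test contents are illustrative):

import Test.Tasty
import Test.Tasty.HUnit
import Test.Tasty.QuickCheck

main :: IO ()
main = defaultMain $ testGroup "all tests"
  [ testCase "head of singleton" $ (1 :: Int) @=? head [1]
  , testProperty "reverse . reverse == id" $
      \xs -> reverse (reverse xs) == (xs :: [Int])
  ]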

Related

Is there a programmatic approach to check for unused unit test files in a crate?

The documentation for Rust tests indicates that unit tests are run when the module that defines them is in scope, with the test configuration flag activated:
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        let result = 2 + 2;
        assert_eq!(result, 4);
    }
}
In a large project, however, it is common to move a test module into a separate file, in which case the test module is declared differently:
#[cfg(test)]
#[path = "unit_tests/tests.rs"]
mod tests;
With this setup, in a large project, it may happen that someone deletes the reference to src/unit_tests/tests.rs (i.e. the code block above), without deleting the test file itself. This usually indicates a bug:
either the removal was intentional, and the file src/unit_tests/tests.rs should not remain in the repository,
or it wasn't, in which case the tests in src/unit_tests/tests.rs are meant to be run and aren't, and CI should be vocal about that.
I'd like to add a verification that there are no such unmoored test files in my crate, to the CI of my project. Is there any tooling in the rust ecosystem that could help me detect those unlinked test files programmatically?
(Integration tests are automatically compiled if found in the tests/ directory of the crate, but IIUC, that does not apply to unit tests.)
There is an issue asking for cargo to do this check, cargo (publish) should complain about unused source files, and from that I found that the cargo-modules crate can do this. For example:
//! src/lib.rs
#[cfg(test)]
mod tests;

//! src/tests.rs
#[test]
fn it_works() {
    let result = 2 + 2;
    assert_eq!(result, 4);
}
> cargo modules generate tree --lib --with-orphans --cfg-test
crate mycrate
└── mod tests: pub(crate) #[cfg(test)]
However, if I remove the #[cfg(test)] mod tests; the output would look like this:
> cargo modules generate tree --lib --with-orphans --cfg-test
crate mycrate
└── mod tests: orphan
You would be able to grep for "orphan" and fail your check based on that.
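As a sketch of such a CI check (using the same cargo-modules invocation shown above; it assumes cargo-modules is installed on the CI machine):

# Fail the build if cargo-modules reports any orphaned module.
if cargo modules generate tree --lib --with-orphans --cfg-test | grep -q "orphan"; then
    echo "error: orphaned test module(s) detected" >&2
    exit 1
fi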
I will say, though, that this tool seems very slow in my small tests. The discussion in the linked issue indicates that a better approach would have to be used if this were implemented in cargo.
You will also have to keep in mind that any other conditional compilation may yield false positives (e.g. modules you legitimately include or exclude based on architecture, OS, or features). There is an --all-features flag that could help with some of that.
There is a very old rustc issue on the subject, which remains open.
However one of the commenters provides a hack:
One possible way to implement this (including as a third-party tool) is to parse the .d files rustc emits for cargo, and look for any .rs files that aren't mentioned.
rg --no-filename '^[^/].*\.rs:$' target/debug/deps/*.d | sed 's/:$//' | sort -u | diff - <(fd '\.rs$' | sort -u)
Apparently the .d files are "makefile-compatible dependency lists", so I assume they list the relationships between all Rust files in the project - which is why, if one is missing, it's not seen by cargo / rustc.

Passing custom command-line arguments to a Rust test

I have a Rust test which delegates to a C++ test suite using doctest, and I want to pass command-line parameters to it. My first attempt was:
// in mod ffi
pub fn run_tests(cli_args: &mut [String]) -> bool;

#[test]
fn run_cpp_test_suite() {
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        panic!("C++ test suite reported errors");
    }
}
Because cargo test help shows
USAGE:
    cargo.exe test [OPTIONS] [TESTNAME] [-- <args>...]
I expected
cargo test -- --test-case="X"
to let run_cpp_test_suite access and pass on the --test-case="X" parameter. But it doesn't; I get error: Unrecognized option: 'test-case' and cargo test -- --help shows it has a fixed set of options
Usage: --help [OPTIONS] [FILTER]

Options:
        --include-ignored
                        Run ignored and not ignored tests
        --ignored       Run only ignored tests
...
My other idea was to pass the arguments in an environment variable, that is
DOCTEST_ARGS="--test-case='X'" cargo test
but then I need to somehow split that string into arguments (handling at least spaces and quotes correctly) either in Rust or in C++.
There are two pieces of the Rust toolchain involved when you run cargo test.
cargo test itself looks for all testable targets in your package or workspace, builds them with cfg(test), and runs those binaries. cargo test processes the arguments to the left of the --, and the arguments to the right are passed to the binary.
Then,
Tests are built with the --test option to rustc which creates an executable with a main function that automatically runs all functions annotated with the #[test] attribute in multiple threads. #[bench] annotated functions will also be run with one iteration to verify that they are functional.
The libtest harness may be disabled by setting harness = false in the target manifest settings, in which case your code will need to provide its own main function to handle running tests.
The “libtest harness” is what rejects your extra arguments. In your case, since you're intending to run an entire other test suite, I believe it would be appropriate to disable the harness.
Move your delegation code to its own file, conventionally located in tests/ in your package directory:
Cargo.toml
src/
    lib.rs
    ...
tests/
    cpp_test.rs
Write an explicit target section in your Cargo.toml for it, with harness disabled:
[[test]]
name = "cpp_test"
# path = "tests/cpp_test.rs" # This is automatic; you can use a different path if you really want to.
harness = false
In cpp_test.rs, instead of writing a function with the #[test] attribute, write a normal main function which reads env::args() and calls the C++ tests.
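A minimal sketch of what that main might look like (the crate name mycrate is a placeholder, and ffi::run_tests is the binding assumed in the question):

// tests/cpp_test.rs -- built with harness = false, so we supply main ourselves
use std::{env, process};

use mycrate::ffi; // hypothetical crate name; adjust to your package

fn main() {
    // With the libtest harness disabled, everything after `--` on the
    // `cargo test` command line arrives here untouched.
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        process::exit(1);
    }
}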
[Disclaimer: I'm familiar with these mechanisms because I've used Criterion benchmarking (which similarly requires disabling the default harness) but I haven't actually written a test with custom arguments the way you're looking for. So, some details might be wrong. Please let me know if anything needs correcting.]
In addition to Kevin Reid's answer, if you don't want to write your own test harness, you can use the shell-words crate to split an environment variable into individual arguments following shell rules:
use std::env::var;
use std::process::Command;

let args = var("DOCTEST_ARGS").unwrap_or_else(|_| String::new());
let args = shell_words::split(&args).expect("failed to parse DOCTEST_ARGS");

Command::new("cpptest")
    .args(args)
    .spawn()
    .expect("failed to start subprocess")
    .wait()
    .expect("failed to wait for subprocess");

The correct way to write unit tests for a module in OCaml

I have a given interface specification in the module.mli file. I have to write its implementation in the module.ml file.
module.mli provides an abstract type
type abstract_type
I'm using OUnit to create the tests, and I need to use the type's implementation in them (for example, to compare values). One solution would be to extend the interface to contain additional functions used in the tests.
But is it possible to do such a thing without modifying the interface?
The only way to expose tests without touching the module interface would be to register the tests with some global container. If you have a module called Tests that provides a function register, your module.ml would contain something like this:
let some_test = ...
let () = Tests.register some_test
I don't recommend this approach because the Tests module loses control over what tests it's going to run.
Instead I recommend exporting the tests, i.e. adding them to module.mli.
Note that without depending on OUnit, you can export tests of the following type that anyone can run. Our tests look like this:
let test_cool_feature () =
  ...
  assert ...;
  ...
  assert ...;
  true

let test_super_feature () =
  ...
  a = b

let tests = [
  "cool feature", test_cool_feature;
  "super feature", test_super_feature;
]
The interface is:
...
(**/**)
(* begin section ignored by ocamldoc *)
val test_cool_feature : unit -> bool
val test_super_feature : unit -> bool
val tests : (string * (unit -> bool)) list
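Any test driver can then consume the exported list without depending on OUnit. A minimal sketch of such a driver (the module name Module and the exit-code handling are my assumptions):

let () =
  let failed =
    List.filter (fun (_, test) -> not (test ())) Module.tests
  in
  List.iter (fun (name, _) -> Printf.eprintf "FAILED: %s\n" name) failed;
  if failed <> [] then exit 1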

Selective running of tests in HUnit

Test.HUnit provides a big red button to run a test:
runTestTT :: Test -> IO Counts
As there is a need to structure large test suites, Test is not a single test but is actually a labelled rose tree with Assertions at the leaves:
data Test
  = TestCase Assertion
  | TestList [Test]
  | TestLabel String Test
  -- Defined in `Test.HUnit.Base'
It's not abstract so it's possible to process it. One particularly useful processing is extraction of subtrees by paths:
byPath = flip $ foldl f where
  f (TestList l)    = (l !!)
  f (TestLabel _ t) = const t
  f t               = const t
So for example I can run a single subsuite runTestTT $ byPath [1] tests or a particular test runTestTT $ byPath [1,7,3] tests identified by test path instead of waiting for whole suite.
One disadvantage of this homegrown tool is that the test paths are not preserved (the reported labels come out shortened).
Is there such processing helper tool already on Hackage?
The closest thing to what you need seems to be the libraries and programs that abstract over HUnit, QuickCheck and other tests, and have their own test-name grouping and management infrastructure, e.g. test-framework. It provides you with a main function that takes command-line arguments, including one that allows you to specify a test or test group to run (by globbing on the name).

What unit testing frameworks are available for F#

I am looking specifically for frameworks that allow me to take advantage of unique features of the language. I am aware of FsUnit. Would you recommend something else, and why?
My own unit testing library, Unquote, takes advantage of F# quotations to allow you to write test assertions as plain, statically checked F# boolean expressions and automatically produces nice step-by-step test failure messages. For example, the following failing xUnit test
[<Fact>]
let ``demo Unquote xUnit support`` () =
    test <# ([3; 2; 1; 0] |> List.map ((+) 1)) = [1 + 3..1 + 0] #>
produces the following failure message
Test 'Module.demo Unquote xUnit support' failed:
([3; 2; 1; 0] |> List.map ((+) 1)) = [1 + 3..1 + 0]
[4; 3; 2; 1] = [4..1]
[4; 3; 2; 1] = []
false
C:\File.fs(28,0): at Module.demo Unquote xUnit support()
FsUnit and Unquote have similar missions: to allow you to write tests in an idiomatic way, and to produce informative failure messages. But FsUnit is really just a small wrapper around NUnit Constraints, creating a DSL which hides object construction behind composable function calls. That convenience comes at a cost: you lose static type checking in your assertions. For example, the following is valid in FsUnit
[<Test>]
let test1 () =
    1 |> should not (equal "2")
But with Unquote, you get all of F#'s static type-checking features so the equivalent assertion would not even compile, preventing us from introducing a bug in our test code
[<Test>] //yes, Unquote supports both xUnit and NUnit automatically
let test2 () =
    test <# 1 <> "2" #> //simple assertions may be written more concisely, e.g. 1 <>! "2"
//               ^^^
//Error 22 This expression was expected to have type int but here has type string
Also, since quotations are able to capture more information at compile time about an assertion expression, failure messages are a lot richer too. For example the failing FsUnit assertion 1 |> should not (equal 1) produces the message
Test 'Test.Swensen.Unquote.VerifyNunitSupport.test1' failed:
Expected: not 1
But was: 1
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\FsUnit.fs(11,0): at FsUnit.should[a,a](FSharpFunc`2 f, a x, Object y)
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\VerifyNunitSupport.fs(29,0): at Test.Swensen.Unquote.VerifyNunitSupport.test1()
Whereas the failing Unquote assertion 1 <>! 1 produces the following failure message (notice the cleaner stack trace too)
Test 'Test.Swensen.Unquote.VerifyNunitSupport.test1' failed:
1 <> 1
false
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\VerifyNunitSupport.fs(29,0): at Test.Swensen.Unquote.VerifyNunitSupport.test1()
And of course from my first example at the beginning of this answer, you can see just how rich and complex Unquote expressions and failure messages can get.
Another major benefit of using plain F# expressions as test assertions over the FsUnit DSL, is that it fits very well with the F# process of developing unit tests. I think a lot of F# developers start by developing and testing code with the assistance of FSI. Hence, it is very easy to go from ad-hoc FSI tests to formal tests. In fact, in addition to special support for xUnit and NUnit (though any exception-based unit testing framework is supported as well), all Unquote operators work within FSI sessions too.
I haven't yet tried Unquote, but I feel I have to mention FsCheck:
http://fscheck.codeplex.com/
This is a port of Haskell's QuickCheck library, where rather than specifying what specific tests to carry out, you specify what properties of your function should hold true.
To me, this is a bit harder than using traditional tests, but once you figure out the properties, you'll have more solid tests. Do read the introduction: http://fscheck.codeplex.com/wikipage?title=QuickStart&referringTitle=Home
I'd guess a mix of FsCheck and Unquote would be ideal.
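For a taste of what that looks like, a property in FsCheck is just a function whose arguments the library generates for you. A minimal sketch (the property and names are illustrative, not from the FsCheck docs):

open FsCheck

// Should hold for every list FsCheck generates.
let revRevIsOriginal (xs: int list) =
    List.rev (List.rev xs) = xs

Check.Quick revRevIsOriginal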
You could try my unit testing library Expecto; it has some features you might like:
F# syntax throughout, tests as values; write plain F# to generate tests
Use the built-in Expect module, or an external lib like Unquote for assertions
Parallel tests by default
Test your Hopac code or your Async code; Expecto is async throughout
Pluggable logging and metrics via Logary Facade; easily write adapters for build systems, or use the timing mechanism for building an InfluxDB+Grafana dashboard of your tests' execution times
Built-in support for BenchmarkDotNet
Built-in support for FsCheck; makes it easy to build tests with generated/random data or to build invariant models of your object's/actor's state space
Hello world looks like this
open Expecto

let tests =
  test "A simple test" {
    let subject = "Hello World"
    Expect.equal subject "Hello World" "The strings should equal"
  }

[<EntryPoint>]
let main args =
  runTestsWithArgs defaultConfig args tests