Create and run a minimal test suite in Haskell using HUnit only - unit-testing

I'm relatively new to Haskell, so apologies in advance if my terminology is not quite correct.
I would like to implement some plain unit tests for a very simple project, managed through cabal. I noticed this very similar question, but it didn't really help. This one didn't either (and it mentions tasty, see below).
I think I can accomplish this by using only HUnit - however, I admit I am a bit confused by all the other "things" that guides on the net talk about:
I don't quite appreciate the difference between the interfaces exitcode-stdio-1.0 and detailed-0.9
I am not sure about the differences (or the mid- and long-term implications) of using HUnit, QuickCheck, or others.
What is the role of tasty, which the HUnit guide mentions?
So I tried to leave all "additional" packages out of the equation, keep everything else as "default" as possible, and did the following:
$ mkdir example ; mkdir example/test
$ cd example
$ cabal init
Then I edited example.cabal and added this section:
Test-Suite test-example
    type:             exitcode-stdio-1.0
    hs-source-dirs:   test, app
    main-is:          Main.hs
    build-depends:    base >=4.15.1.0,
                      HUnit
    default-language: Haskell2010
Then I created test/Main.hs with this content:
module Main where

import Test.HUnit

tests = TestList [
    TestLabel "test2"
        (TestCase $ assertBool "Why is this not running," False)
    ]

main :: IO ()
main = do
    runTestTT tests
    return ()
Finally, I tried to run the whole lot:
$ cabal configure --enable-tests && cabal build && cabal test
Up to date
Build profile: -w ghc-9.2.4 -O1
In order, the following will be built (use -v for more details):
- example-0.1.0.0 (test:test-example) (additional components to build)
Preprocessing test suite 'test-example' for example-0.1.0.0..
Building test suite 'test-example' for example-0.1.0.0..
Build profile: -w ghc-9.2.4 -O1
In order, the following will be built (use -v for more details):
- example-0.1.0.0 (test:test-example) (ephemeral targets)
Preprocessing test suite 'test-example' for example-0.1.0.0..
Building test suite 'test-example' for example-0.1.0.0..
Running 1 test suites...
Test suite test-example: RUNNING...
Test suite test-example: PASS
Test suite logged to:
/home/jir/workinprogress/haskell/example/dist-newstyle/build/x86_64-linux/ghc-9.2.4/example-0.1.0.0/t/test-example/test/example-0.1.0.0-test-example.log
1 of 1 test suites (1 of 1 test cases) passed.
And the output is not what I expected: the suite is reported as PASS even though my assertion should fail.
I'm clearly doing something fundamentally wrong, but I don't know what it is.

In order for the exitcode-stdio-1.0 test type to recognize a failed suite, you need to arrange for your test suite's main function to exit with failure in case there are any test failures. Fortunately, there's a runTestTTAndExit function to handle this, so if you replace your main with:
main = runTestTTAndExit tests
it should work fine.
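For reference, the complete test/Main.hs would then look like this (a minimal sketch, assuming an HUnit version that exports runTestTTAndExit, i.e. 1.6.1.0 or later):

module Main where

import Test.HUnit

-- Same test list as in the question; the failing assertion is kept on
-- purpose so the suite visibly fails.
tests :: Test
tests = TestList [
    TestLabel "test2"
        (TestCase $ assertBool "Why is this not running," False)
    ]

main :: IO ()
main = runTestTTAndExit tests

runTestTTAndExit runs the tests, prints the usual summary, and exits with a non-zero code if any test failed or errored. That exit code is what the exitcode-stdio-1.0 interface inspects, so cabal test will now report the suite as FAIL whenever an assertion fails.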

Related

Testing base code with wasm-pack test throws "use of undeclared crate or module" error

Could someone tell me what I'm doing wrong here? I've written script entries in package.json; the one for tests looks like this:
"test:headless": "cross-env RUSTFLAGS=\"-C target-feature=+atomics,+bulk-memory,+mutable-globals,+simd128\" rustup run nightly-2022-04-07 wasm-pack test --headless --chrome -- -Z build-std=panic_abort,std",
However, the executed test cannot recognize the wasm-rayon namespace, which is my base code.
Here is the link to the repo: https://github.com/MalwareX95/wasm-playground
I had to add rlib in crate-type: https://rustwasm.github.io/docs/wasm-pack/tutorials/npm-browser-packages/template-deep-dive/cargo-toml.html
We also specify crate-type = ["rlib"] to ensure that our library can be unit tested with wasm-pack test.

Rust tests fail to even run

I'm writing a project to learn how to use Rust and I'm calling my project future-finance-labs. After writing some basic functions and verifying the app can be built I wanted to include some tests, located in aggregates/mod.rs. [The tests are in the same file as the actual code as per the documentation.] I'm unable to get the tests to run despite following the documentation to the best of my ability. I have tried to build the project using PowerShell as well as Bash. [It fails to run on Fedora Linux as well]
Here is my output on Bash:
~/future-finance-labs$ cargo test -- src/formatters/mod.rs
Finished test [unoptimized + debuginfo] target(s) in 5.98s
Running target/debug/deps/future_finance_labs-16ed066e1ea3b9a1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Using PowerShell I get the same output with some errors like the following:
error: failed to remove C:\Users\jhale\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\home\jhale\future-finance-labs\target\debug\build\mime_guess-890328c8763afc22\build_script_build-890328c8763afc22.build_script_build.c22di3i8-cgu.0.rcgu.o: The system cannot find the path specified. (os error 3)
After my initial excitement at the prospect of writing a few tests that passed on the first attempt, I quickly realized that all the green was indicative, rather, of a failure to even run the tests. I just want to run the unit tests. Running cargo test alone, without a separate file argument, fails as well. Why can't I run any test in this project with my current setup?
It can't find your tests because the Rust compiler doesn't know about them. You need to add mod aggregates; to your main.rs:
mod aggregates;

fn main() {
    println!("Hello, world!");
}
After you do that, you'll see that your aggregates/mod.rs doesn't compile for many reasons.
And as Mihir was trying to say, you need to use the name of the test, not the name of the file, to run a specific test:
cargo test min_works
cargo test aggregates
See also:
How do I “use” or import a local Rust file?
Rust Book: Controlling How Tests Are Run

Compile specific tests into binary

I would like to compile a binary which runs a certain subset of tests. When I run the following, it works:
ubuntu#ubuntu-xenial:/ox$ cargo test hash::vec
Finished dev [unoptimized + debuginfo] target(s) in 0.11 secs
Running target/debug/deps/ox-824a031ff1732165
running 9 tests
test hash::vec::test_hash_entry::test_get_offset_tombstone ... ok
test hash::vec::test_hash_entry::test_get_offset_value ... ok
test hash::vec::test_hash_table::test_delete ... ok
test hash::vec::test_hash_table::test_delete_and_set ... ok
test hash::vec::test_hash_table::test_get_from_hash ... ok
test hash::vec::test_hash_table::test_get_non_existant_from_hash ... ok
test hash::vec::test_hash_table::test_override ... ok
test hash::vec::test_hash_table::test_grow_hash ... ok
test hash::vec::test_hash_table::test_set_after_filled_with_tombstones ... ok
test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 8 filtered out
When I try to run target/debug/deps/ox-824a031ff1732165, it runs all my tests, not just the 9 specified in hash::vec.
I've tried to run cargo rustc --test hash::vec but I get
error: no test target named `hash::vec`
`cargo rustc -- --test` works, but creates a binary that runs all tests. If I try `cargo rustc -- --test hash::vec`, I get:
Compiling ox v0.1.0 (file:///ox)
error: multiple input filenames provided
error: Could not compile `ox`.
cargo rustc -h says that you can pass NAME with the --test flag (--test NAME Build only the specified test target), so I'm wondering what "NAME" is and how to pass it in so I get a binary that only runs the specified 9 tests in hash::vec.
You can't, at least not directly.
In the case of cargo test hash::vec, the hash::vec is just a substring matched against the full path of each test function when the test runner is executed. That is, it has absolutely no impact whatsoever on which tests get compiled, only on which tests run. In fact, this parameter is passed straight through to the test runner; Cargo doesn't interpret it at all.
In the case of --test NAME, NAME is the name of the test source. As in, passing --test blah tells Cargo to build and run the tests in tests/blah.rs. It's the same sort of argument as --bin NAME (for src/bin/NAME.rs) and --example NAME (for examples/NAME.rs).
If you really want to only compile a particular subset of tests, the only way I can think of is to use conditional compilation via features. You'd need a package feature for each subset of tests you want to be able to enable/disable.
This functionality has found its way into Cargo. cargo build now has a parameter
--test [<NAME>] Build only the specified test target
which builds a binary with the specified set of tests only.

When to run unit-tests?

We are currently setting up our build process within an automated continuous integration environment and are facing the fundamental question: when should the unit tests be run?
One way would be to run the unit tests with every build task, so that as soon as one unit test fails, the whole build fails. This has the advantage that the developer is always forced to keep the unit tests green, as s/he is otherwise not able to run the application. On the other hand, you are constantly distracted by fixing the tests during development, which might force you to work in very small iterations. Besides that, the time to run your application always increases, as you have to wait for the tests every time.
The other way would be to let the CI server run the tests after each new commit and simply let the developer know that something went wrong. In this way the developer is pretty free to decide when to take care of the unit tests, but other developers on the same branch might suffer, because they cannot be sure that all parts of the software work as expected and have to check themselves whether the failing tests also affect their work.
So do you have any best practices or recommendations on when it would be a good time to run the tests?
BTW: of course we also run bigger integration tests, which are handled in a separate CI process.
Short answer: run all unit tests on the build server for every commit, on every branch. Assuming your unit tests don't take a really long time to run, there really is no downside to this. As for running all unit tests on every build task locally, that would be overkill. Developers should have the discipline to decide when to run the tests and when not to.
You want to know as soon as possible when something is wrong so you can fix it promptly. You also want to know all of the tests that fail rather than just the first test that fails. When there are multiple issues it would be a pretty annoying workflow to only fix the one issue and then have to commit, push, and wait for the build to run again to see if there are more issues.
Your build process should have two targets: build and test. test should be the default target when nothing else is specified. The tests can't run until the project has been built, so the build target is a dependency of test. So (supposing you use make): make or make test will build and test, while make build will just build the project.
Now, if you're using some IDE, you could consider running the tests in some separate way "outside" of the IDE. So maybe add a third target ide and let the IDE build that one. It could then have the build target as a normal dependency and, as a last step, spawn a new job in the background to do the testing in its own terminal window, something like (under Linux): ( xterm -e ./run-tests & ).
And if you're developing outside of an IDE (like I do), then just have a separate terminal run the build and tests. As soon as testing starts, you know the build process has finished, so you can already run your application, even though the tests are still running.
Just to demonstrate this (and as a proof of concept for having the tests run in the background), I created a trivial test case:
bodo.c:
#include <stdio.h>

int main(int argc, char * argv[]) {
    printf("Hallo %s", argc > 1 ? argv[1] : "Welt");
    return 0;
}
Makefile:
test: build run-tests

ide: build run-tests-background

run-tests-background:
	( xterm -e ./run-tests --wait & )

run-tests:
	./run-tests

build: bodo

bodo: bodo.o

bodo.o: bodo.c

.PHONY: run-tests run-tests-background
run-tests:
#! /bin/sh
retval=true
if test "$(./bodo)" != "Hallo Welt"
then
    echo "Test failed []"
    retval=false
fi
if test "$(./bodo Bodo)" != "Hallo Bodo"
then
    echo "Test failed [Bodo]"
    retval=false
fi
if test "$(./bodo Fail)" != "Hallo Bodo"
then
    echo "Test failed [Fail]"
    retval=false
fi
sleep 5 # Simulate some more tests
if $retval
then
    echo "All tests succeeded ;)"
else
    echo "Some tests failed :("
fi
if test "$1" = "--wait"
then
    read -p "Press ENTER to close" enter
fi
if $retval
then
    exit 0
else
    exit 2
fi
Usage:
Build the project but do not run the tests
make build
Build the project and run the tests in the current terminal
make
Build the project and run the tests in a separate terminal. Make will return once the build process has completed and the tests have started
make ide
And two helpers, which are not supposed to be run by hand:
Only run the tests in the current terminal (this will fail if the project wasn't built yet)
make run-tests
Only run the tests in a separate terminal (this will fail if the project wasn't built yet). Make will return immediately
make run-tests-background

When using "stack test", my hspec tests output is not colorized

This is an infuriating thing since I have built Hspec-based test suites in which colors all behave normally. But on this project, I cannot get colors to appear when I run all of the test suites at once.
My project.cabal is set up like this:
test-suite unit
    type:             exitcode-stdio-1.0
    main-is:          SpecMain.hs
    hs-source-dirs:   tests/unit
    other-modules:    WikiSpec
    default-language: Haskell2010
    ghc-options:      -Wall -fno-warn-orphans -threaded
    build-depends:    base >=4.6
    ...

test-suite integration
    type:             exitcode-stdio-1.0
    main-is:          SpecMain.hs
    hs-source-dirs:   tests/integration, webapp
    other-modules:    ApiSpec
    default-language: Haskell2010
    ghc-options:      -Wall -fno-warn-orphans -threaded
    build-depends:    base >=4.6
    ...
And then my SpecMain.hs files (identical) contain this:
{-# OPTIONS_GHC -F -pgmF hspec-discover #-}
So, when I run stack test, all of my tests run, but the output is not colorized. If I run stack build --file-watch --test, the tests run, but if there is any failure at all then all of the output is colored red. Finally, if I run stack test weblog:unit or stack test weblog:integration, then the colors end up exactly as they should be. Headers are white, passing tests are green, failing tests are red, and pending tests are yellow.
When I'm doing active development I tend to depend on stack build --file-watch --test, but I really need the colors to be right.
Have any of you any idea what is going on, how I can fix this, or what additional information I need to provide?
By default, hspec only uses colors when the output is shown on a terminal and when the environment variable TERM is not "dumb" (or isn't set). Unless you have set TERM to "dumb", it's likely that something is going on with the terminal detection.
Either way, stack build lets you pass arguments to test suites with --test-arguments, and hspec interprets several command-line arguments, including --color and --no-color, which override the default behaviour. Therefore, you can force the colors:
stack test --file-watch --test-arguments "--color"
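If you prefer to force colors from the test suite itself rather than on the command line, hspec's runner configuration also exposes a color mode. A minimal sketch, assuming Test.Hspec.Runner's hspecWith/defaultConfig/configColorMode API and a hand-written main (which would replace the hspec-discover pragma; the inline spec is only a placeholder for your discovered modules):

module Main where

import Test.Hspec
import Test.Hspec.Runner (ColorMode (..), Config (..), defaultConfig, hspecWith)

main :: IO ()
main = hspecWith defaultConfig { configColorMode = ColorAlways } spec

-- Placeholder spec; in the real project this would be the discovered
-- WikiSpec/ApiSpec specs.
spec :: Spec
spec = describe "colors" $
    it "stay on even when output is piped" $
        (1 + 1) `shouldBe` (2 :: Int)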
Stack exhibits the behavior you are seeing when you give it more than one package to test at a time. Typically, this happens because you have more than one location listed in the packages stanza of your stack.yaml file.
Recent versions of stack mention the following in the auto-generated stack.yaml file:
# A package marked 'extra-dep: true' will only be built if demanded by a
# non-dependency (i.e. a user package), and its test suites and benchmarks
# will not be run. This is useful for tweaking upstream packages.
If you mark all but one location in the packages stanza as an extra-dep, stack will revert to its single-package behavior when testing, and show your colorized test results as you expect.
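For illustration, a packages stanza along those lines might look like this (a sketch assuming the older location/extra-dep syntax that the generated comment refers to, with ../some-upstream-package standing in for a hypothetical second location):

packages:
- '.'
- location: ../some-upstream-package
  extra-dep: true

With only '.' left as a regular package, stack reverts to its single-package behavior when testing, as described above, and the colorized hspec output comes back.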