Rust unit tests organization in sub-modules

I want to better organize the unit tests inside a Rust file.
This file contains plenty of functions and their variations, and I would like to organize "submodules" for testing, but I don't know whether that makes sense in Rust with Cargo.
I want to do something like
// fn etc, etc
#[cfg(test)]
mod tests {
    use super::*;

    mod test_a {
        use super::*;

        #[test]
        fn a_panics() {}
        // etc
    }

    mod test_b {
        use super::*;

        #[test]
        fn b_panics() {}
        // etc
    }
}
But I'm unsure how I would have to go about this.
Should every module be annotated with #[cfg(test)], or just the top tests one?
I suppose it works, because in VS Code the CodeLens shows the Run Tests | Debug lens on the tests and on each submodule, but I'm still unsure about the module annotations and whether this is good practice in Rust, since I couldn't find any example of this online.

#[cfg(test)] just tells the compiler not to compile the module (and everything inside it) unless we're testing. This is done solely to save compilation time.
Since it applies to everything inside too, there's no need to mark the nested modules with #[cfg(test)] as well.
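A minimal sketch of that layout, with only the outer module gated (the function a and its #[should_panic] test are placeholders, not taken from the question); each group can then be run on its own with cargo test tests::test_a, since the filter matches against the full test path:
fn a() {
    panic!("a always panics in this sketch");
}

#[cfg(test)]
mod tests {
    use super::*;

    mod test_a {
        use super::*;

        #[test]
        #[should_panic]
        fn a_panics() {
            a();
        }
    }
}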

Related

export a type under #[cfg(test)] in crate A so that it can be used for unit tests in crate B

I have some crates in my workspace: foo, bar, and util. I have a type ForTest in my util crate. This type is used for unit tests in both foo and bar.
I defined ForTest as:
#[cfg(test)]
mod for_test {
    pub struct ForTest {
        ...
    }
}
and exported it as:
#[cfg(test)]
pub use for_test::ForTest;
foo and bar use ForTest like this:
#[cfg(test)]
mod tests {
    use util::ForTest;
    ...
}
Doing this does not currently work.
I could have a feature in the util crate, but it won't stop normal code in foo and bar from using ForTest (unless I can enable the feature only for tests? Is that possible?). I could just be careful when using it, but I would like to exhaust all other options first.
I found this thread on URLO, but 1) it is very old (so it could be outdated) and 2) it does not provide a solution, hence I am asking here.
#[cfg(test)] is enabled only when unit-testing this crate itself, so foo and bar will never see items that util gates behind it.
You can use #[cfg(debug_assertions)] as an approximation instead (but that is also enabled in ordinary debug builds, not just tests).
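For the feature route the question asks about: yes, a feature can be enabled only for tests, by depending on util twice, once normally and once under [dev-dependencies] with the feature turned on. A sketch, assuming a feature named test-util (the name is mine, not from the question).
In util/Cargo.toml:
[features]
test-util = []
In util's lib.rs, gate the module behind the feature instead of #[cfg(test)]:
#[cfg(feature = "test-util")]
mod for_test { /* ... */ }
#[cfg(feature = "test-util")]
pub use for_test::ForTest;
In foo's (and bar's) Cargo.toml:
[dependencies]
util = { path = "../util" }

[dev-dependencies]
util = { path = "../util", features = ["test-util"] }
With this layout, cargo build compiles util without the feature, so non-test code in foo and bar that touches ForTest fails to compile, while cargo test pulls in the dev-dependency's features and makes ForTest visible to the test modules.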

Testing a Kotest test

I am not sure if this is possible or if there's a better architecture for this. I wrote a function that does some tests:
fun validate(a: Any?, b: Any?) {
    a shouldBe b
}
My function is obviously more complex than that, but now I would like to test the test itself. For example, I want to pass a=1 and b=null and make sure it fails. Is there a way to check whether it fails or succeeds using Kotest?
If not, what is a proper way to architect this?
I am trying to rely on this test often and in many places of my service, so I would like it to be as reliable as possible.
Short answer: use shouldFail { ... }.
This is possible because failing tests in Kotest (and many other frameworks) will throw an AssertionError. You can check for that exception to see whether the test failed. So to check for a failed assertion, you can write:
shouldThrow<AssertionError> { /* your failing test here */ }
As you'd probably expect, shouldThrow will catch the exception so that it doesn't propagate. So in effect it will turn a failing test into a passing test, and vice versa.
To make it a bit more convenient and readable, you could write a couple of extension functions, like this:
fun shouldFail(block: () -> Unit) = shouldThrow<AssertionError>(block)
fun shouldNotFail(block: () -> Unit) = shouldNotThrow<AssertionError>(block)
In fact Kotest includes the shouldFail function in the standard assertions, so you don't even need to define it yourself. Their implementation is the same as what I have suggested here. They don't include a shouldNotFail, though, presumably because the usefulness is a bit questionable. Wrapping a test in "shouldNotFail" makes it fail if the assertion fails and pass if the assertion passes, which is... exactly what it would have done if you didn't wrap it in anything at all.
To test your own assertion functions, you can use it like this:
shouldFail { 1 shouldBe 2 }
If you look at the source code for Kotest itself, you'll find many unit tests for the built-in assertion functions using exactly this approach.
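Applied to the validate function from the question, a small spec could look like this (a sketch; the import paths assume a recent Kotest 5.x setup):
import io.kotest.assertions.shouldFail
import io.kotest.core.spec.style.FunSpec
import io.kotest.matchers.shouldBe

// The assertion helper under test, as in the question.
fun validate(a: Any?, b: Any?) {
    a shouldBe b
}

class ValidateTest : FunSpec({
    test("validate passes when the values match") {
        validate(1, 1)
    }

    test("validate fails when the values differ") {
        shouldFail { validate(1, null) }
    }
})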

Getting llvm::LoopInfo from (non-LLVM) code?

For the development of my own Pass I want to write unit tests - I have lots of 'pure' helper methods, so they seem ideal candidates for unit tests. But some of them require an instance of llvm::LoopInfo as an argument.
In my (Function-)Pass I just use
void getAnalysisUsage(llvm::AnalysisUsage &AU) const override {
    AU.setPreservesCFG();
    AU.addRequired<llvm::LoopInfoWrapperPass>();
}
...
llvm::LoopInfo &loopInfo = getAnalysis<LoopInfoWrapperPass>(F).getLoopInfo();
to get this information object.
In my unit test I currently parse my llvm::Function void foo() (that I want to run my analysis on) from disk like this:
llvm::SMDiagnostic Err;
llvm::LLVMContext Context;
std::unique_ptr<llvm::Module> module(parseIRFile(my_bc_filename, Err, Context));
llvm::Function *foo = module->getFunction("foo");
To finalize my test I would have to fill in the following stub:
llvm::LoopInfo &loopInfo = /* run LoopInfoWrapperPass on foo and return the LoopInfo element */;
My first attempts were based on using PassManager<Function> (from the header "llvm/IR/PassManager.h"), AnalysisManager<Function>, and the class LoopInfoWrapperPass, but I couldn't find any example usage online for LLVM 4.0. Older examples seemed to be using a previous version of PassManager, and I did not see how to make use of the LegacyPassManager. I tried to look into the sources for PassManager but could not make enough sense of the typedefs and template arguments (and they are increasing my irrational dislike for C++ as a language).
How do I fill in that stub? How do I call this Analysis Pass (and get LoopInfo) in my plain C++ code?
PS: There are more passes than LoopInfoWrapperPass that I need to use, but I'm assuming the approach is transferable to any analysis pass.
PPS: I'm using googletest as a unit test framework, with a CMake build configuration that makes the unit tests their own target, and I'm building my Pass out-of-tree against binary libs of LLVM 4.0.1, if any of that is somehow relevant.
I am not sure how you have your unit tests structured, but looking around in the LLVM source tree is a good idea.
One example can be found in CFGTest.cpp here.
You need to create the PassManager and the pipeline yourself. From my short experience with this, it works well for small tests, but once you need anything bigger or need to pass data in/out it's really restricting, since the LoopInfo data only has meaning within the pipeline (i.e. the runOn() methods and friends).
Otherwise, you might want to opt (no pun intended) for the simpler (IMHO) method of creating the set of required analyses yourself (only dominators in the case of LoopInfo) without using the pass manager infrastructure. An example of this can be seen here.
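A sketch of that second approach, reusing the Function *foo parsed above (the includes and output are mine; the constructors used here are the ones LoopInfo exposes in the 4.x headers):
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Dominators.h"
#include "llvm/Support/raw_ostream.h"

// Build only the analyses LoopInfo actually needs, outside any pass manager.
llvm::DominatorTree DT(*foo);   // dominators are the sole prerequisite
llvm::LoopInfo LI(DT);          // computed eagerly, usable right here in the test

for (llvm::Loop *L : LI)
    llvm::errs() << "top-level loop with header "
                 << L->getHeader()->getName() << "\n";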
Hope this helps.

How to write test for C++ templates?

Suppose I am writing a template library consisting of a function template
template<typename T> void f(T);
with the requirement that it works with a predefined set of classes A, B, C, and D, e.g., the following must compile:
template<> void f(A);
template<> void f(B);
template<> void f(C);
template<> void f(D);
Which test framework can I use to write test cases that capture this requirement at runtime instead of failing at compilation of the test code? In other words, I would like the framework to instantiate the templates at runtime and produce a nicely formatted error report if a subset of them fails.
I know I can forego test frameworks altogether and simply write a simple cc file containing the 4 lines above. But I was hoping I could incorporate this requirement into regular, standard test cases for generation of test status reports. For example,
test f works with A: passed.
test f works with B: passed.
test f works with C: failed! Cannot cast type C!
test f works with D: passed.
3 of 4 tests passed.
1 of 4 tests failed.
Write a test case that spawns the compiler... that's how e.g. autoconf tests for existence of features.
I don't understand why failing at runtime is preferable to failing at compile time. The earlier you fail in the unit testing process, the better. It is preferable to have your unit tests fail to compile rather than fail at runtime; it's even easier to fix, and in fact it probably won't even be committed to source control. Your unit test should just include those four lines and assert true at the end. Note this isn't the way I would go about doing it myself.
C++ templates are a compile time feature. In many cases they will fail at compile time, by design. You simply can't get around this without doing something really crazy.
However, you're also going to want to know that your template specializations are correct, because the specializations override the behavior you would otherwise get from the template. So test the specializations. But realize you will never get around the compile time aspects of templates.
Based on what you're trying to test here, checking if the thing can compile is the only sensible test you can perform.
Testing should not be for the sake of testing, but to ensure functional correctness. If you want to have proper tests around your class, you should write tests that verify the functionality of your template with all of the 4 different classes it can be compiled with.
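Since the question wants one pass/fail line per type, googletest's typed tests (not mentioned in the answers above) come close to that report format. A sketch, assuming f and the classes A, B, C, D come from the library header (named hypothetically here), that each class is default-constructible, and a googletest version that has TYPED_TEST_SUITE (older releases call it TYPED_TEST_CASE):
#include <gtest/gtest.h>
#include "my_template_library.h"   // hypothetical header providing f, A, B, C, D

template <typename T>
class FWorksWith : public ::testing::Test {};

using SupportedTypes = ::testing::Types<A, B, C, D>;
TYPED_TEST_SUITE(FWorksWith, SupportedTypes);

// One test result is reported per type in SupportedTypes.
TYPED_TEST(FWorksWith, AcceptsTheType) {
    TypeParam value{};
    EXPECT_NO_THROW(f(value));
}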

Is Assert.Fail() considered bad practice?

I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time but when I get ideas for things I want to implement later I quickly write an empty test where the name of the test method indicates what I want to implement as sort of a todo-list. To make sure I don't forget I put an Assert.Fail() in the body.
When trying out xUnit.Net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false) but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so why?
#Martin Meredith
That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once. Or I think about a test to write when I'm working on something else. That's when I write an empty failing test to remember. By the time I get to writing the test I neatly work test-first.
#Jimmeh
That looks like a good idea. Ignored tests don't fail but they still show up in a separate list. Have to try that out.
#Matt Howells
Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.
#Mitch Wheat
That's what I was looking for. It seems it was left out to prevent it being abused in another way I abuse it.
For this scenario, rather than calling Assert.Fail, I do the following (in C# / NUnit)
[Test]
public void MyClassDoesSomething()
{
    throw new NotImplementedException();
}
It is more explicit than an Assert.Fail.
There seems to be general agreement that it is preferable to use more explicit assertions than Assert.Fail(). Most frameworks have to include it, though, because they don't offer a better alternative. For example, NUnit (and others) provide an ExpectedExceptionAttribute to test that some code throws a particular class of exception. However, in order to test that a property on the exception is set to a particular value, you cannot use it. Instead you have to resort to Assert.Fail:
[Test]
public void ThrowsExceptionCorrectly()
{
    const string BAD_INPUT = "bad input";
    try
    {
        new MyClass().DoSomething(BAD_INPUT);
        Assert.Fail("No exception was thrown");
    }
    catch (MyCustomException ex)
    {
        Assert.AreEqual(BAD_INPUT, ex.InputString);
    }
}
The xUnit.Net method Assert.Throws makes this a lot neater without requiring an Assert.Fail method. By not including an Assert.Fail() method xUnit.Net encourages developers to find and use more explicit alternatives, and to support the creation of new assertions where necessary.
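For reference, the xUnit.net version of the same test could look like this (a sketch, reusing the hypothetical MyClass, MyCustomException and InputString from the NUnit example above):
[Fact]
public void ThrowsExceptionCorrectly()
{
    const string BAD_INPUT = "bad input";
    var ex = Assert.Throws<MyCustomException>(() => new MyClass().DoSomething(BAD_INPUT));
    Assert.Equal(BAD_INPUT, ex.InputString);
}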
It was deliberately left out. This is Brad Wilson's reply as to why there is no Assert.Fail():
We didn't overlook this, actually. I find Assert.Fail is a crutch which implies that there is probably an assertion missing. Sometimes it's just the way the test is structured, and sometimes it's because Assert could use another assertion.
I've always used Assert.Fail() for handling cases where you've detected that a test should fail through logic beyond simple value comparison. As an example:
try
{
    // Some code that should throw ExceptionX
    Assert.Fail("ExceptionX should be thrown");
}
catch (ExceptionX ex)
{
    // test passed
}
Thus the lack of Assert.Fail() in the framework looks like a mistake to me. I'd suggest patching the Assert class to include a Fail() method, and then submitting the patch to the framework developers, along with your reasoning for adding it.
As for your practice of creating tests that intentionally fail in your workspace, to remind yourself to implement them before committing, that seems like a fine practice to me.
I use MbUnit for my unit testing. It has an option to Ignore tests, which then show up as orange (rather than green or red) in the test suite. Perhaps xUnit has something similar, which would mean you don't even have to put any assert into the method, because it would show up in an annoyingly different colour, making it hard to miss?
Edit:
In MbUnit it is in the following way:
[Test]
[Ignore]
public void YourTest()
{ }
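To answer the aside above: xUnit.net does have an equivalent, a skip reason on the fact itself (a sketch; the reason text is arbitrary):
[Fact(Skip = "Not implemented yet")]
public void YourTest()
{ }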
This is the pattern that I use when writing a test for code that I want to throw an exception by design:
[TestMethod]
public void TestForException()
{
    Exception _Exception = null;
    try
    {
        // Code that I expect to throw the exception.
        MyClass _MyClass = null;
        _MyClass.SomeMethod();
    }
    catch (Exception _ThrownException)
    {
        _Exception = _ThrownException;
    }
    finally
    {
        Assert.IsNotNull(_Exception);
        // Replace NullReferenceException with the expected exception type.
        Assert.IsInstanceOfType(_Exception, typeof(NullReferenceException));
    }
}
IMHO this is a better way of testing for exceptions than using Assert.Fail(). The reason is that I not only test that an exception is thrown at all, I also test for the exception type. I realise this is similar to the answer from Matt Howells, but IMHO using the finally block is more robust.
Obviously it would still be possible to include other Assert methods to test the exception's input string, etc. I would be grateful for your comments and views on my pattern.
Personally I have no problem with using a test suite as a todo list like this as long as you eventually get around to writing the test before you implement the code to pass.
Having said that, I used to use this approach myself, although now I'm finding that doing so leads me down a path of writing too many tests upfront, which in a weird way is like the reverse problem of not writing tests at all: you end up making decisions about design a little too early IMHO.
Incidentally in MSTest, the standard Test template uses Assert.Inconclusive at the end of its samples.
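For comparison, the MSTest placeholder mentioned above looks roughly like this (a sketch; the message text is mine, not the exact template wording):
[TestMethod]
public void MyNewBehaviour()
{
    // TODO: implement the real test.
    Assert.Inconclusive("Test not written yet.");
}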
AFAIK the xUnit.NET framework is intended to be extremely lightweight, and yes, they did cut Fail deliberately to encourage developers to use an explicit failure condition.
Wild guess: withholding Assert.Fail is intended to stop you thinking that a good way to write test code is as a huge heap of spaghetti leading to an Assert.Fail in the bad cases. [Edit to add: other people's answers broadly confirm this, but with quotations]
Since that's not what you're doing, it's possible that xUnit.Net is being over-protective.
Or maybe they just think it's so rare and so unorthogonal as to be unnecessary.
I prefer to implement a function called ThisCodeHasNotBeenWrittenYet (actually something shorter, for ease of typing). Can't communicate intention more clearly than that, and you have a precise search term.
Whether that fails, or is not implemented (to provoke a linker error), or is a macro that doesn't compile, can be changed to suit your current preference. For instance when you want to run something that is finished, you want a fail. When you're sitting down to get rid of them all, you may want a compile error.
With the good code I usually do:
void goodCode() {
    // TODO void goodCode()
    throw new UnsupportedOperationException("void goodCode()");
}
With the test code I usually do:
@Test
void testSomething() {
    // TODO void testSomething
    Assert.fail("Some descriptive text about what to test");
}
If using JUnit, and I don't want to get a failure but an error, then I usually do:
@Test
void testSomething() {
    // TODO void testSomething
    throw new UnsupportedOperationException("Some descriptive text about what to test");
}
Beware Assert.Fail and its corrupting influence, which leads developers to write silly or broken tests. For example:
[TestMethod]
public void TestWork()
{
    try {
        Work();
    }
    catch {
        Assert.Fail();
    }
}
This is silly, because the try-catch is redundant. A test fails if it throws an exception.
Also
[TestMethod]
public void TestDivide()
{
    try {
        Divide(5, 0);
        Assert.Fail();
    } catch { }
}
This is broken: the test will always pass whatever the outcome of the Divide function, because the empty catch also swallows the exception thrown by Assert.Fail(). Again, a test fails if and only if it throws an exception.
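A non-broken version of that second test would assert on the throw directly, e.g. with MSTest V2's Assert.ThrowsException (a sketch; DivideByZeroException is an assumption about what Divide(5, 0) throws):
[TestMethod]
public void TestDivide()
{
    Assert.ThrowsException<DivideByZeroException>(() => Divide(5, 0));
}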
If you write a test that just fails, then write the code for it, and only then write the real test, that isn't Test Driven Development.
Technically, Assert.fail() shouldn't be needed if you're using test-driven development correctly.
Have you thought of using a Todo List, or applying a GTD methodology to your work?
MS Test has Assert.Fail() but it also has Assert.Inconclusive(). I think that the most appropriate use for Assert.Fail() is if you have some in-line logic that would be awkward to put in an assertion, although I can't even think of any good examples. For the most part, if the test framework supports something other than Assert.Fail() then use that.
I think you should ask yourself what (upfront) testing should do.
First, you write a (set of) tests without an implementation, maybe including the rainy-day scenarios.
All of those tests must fail in order to be correct tests.
So you want to achieve two things:
1) verify that your implementation is correct;
2) verify that your unit tests are correct.
Now, if you do upfront TDD, you want to execute all your tests, including the not-yet-implemented (NYI) parts.
The result of your total test run passes if:
1) all implemented stuff succeeds, and
2) all NYI stuff fails.
After all, it would be a unit-test omission if your unit tests succeeded while there is no implementation, wouldn't it?
You want to end up with something like a mail from your continuous integration build that checks all implemented and not-implemented code, and is sent if any implemented code fails or any not-implemented code succeeds. Both are undesired results.
Just writing [Ignore] tests won't do the job.
Neither will an assert that stops at the first failure, without running the other test lines in the test.
Now, how to achieve this?
I think it requires a somewhat more advanced organisation of your tests, and it requires some mechanism other than asserts to achieve these goals.
I think you have to split up your tests and create some tests that run completely but must fail, and vice versa.
Ideas are to split your tests over multiple assemblies or to use grouping of tests (ordered tests in MSTest may do the job).
Still, a CI build that mails unless all tests in the NYI department fail is not easy or straightforward.
Why would you use Assert.Fail for saying that an exception should be thrown? That is unnecessary. Why not just use the ExpectedException attribute?
This is our use case for Assert.Fail().
One important goal for our Unit tests is that they don't touch the database.
Sometimes mocking doesn't happen properly, or application code is modified and a database call is inadvertently made.
This can be quite deep in the call stack. The exception may be caught so it won't bubble up, or, because the tests were initially run against a real database, the call simply works.
What we've done is add a config value to the unit test project so that, when a database connection is first requested, we call Assert.Fail("Database accessed").
Assert.Fail() acts globally, even across different libraries, so this acts as a catch-all for all of the unit tests.
If any one of them hits the database in a unit test project then they will fail.
We therefore fail fast.
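A rough sketch of how such a catch-all could be wired up (every name here is hypothetical; the answer doesn't show its own implementation): the connection factory used under test checks a config flag before handing out a real connection.
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class GuardedConnectionFactory
{
    public static IDbConnection Open(string connectionString)
    {
        // Config value added to the unit test project only.
        if (ConfigurationManager.AppSettings["FailOnDatabaseAccess"] == "true")
            Assert.Fail("Database accessed");

        return new SqlConnection(connectionString);
    }
}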