How does one write function stubs for testing Rust modules?

In my past experience of testing C code, function stubs were pretty much mandatory. In my safety-critical line of work, I am usually required to test everything - even thin abstraction functions which simply call a build-specific implementation, argument for argument, return for return.
I have searched high and low for ways to, in Rust, stub functions external to the module under test, but cannot find a solution.
There are several libraries for trait-mocking, but what little I have read about this topic suggests that it is not what I am looking for.
The other suggestion is that functions under test which call external functions should have those functions passed in, allowing the test to simply pass in the desired pseudostub. This seems very inflexible in terms of the data passing in and out of the pseudostub, and it pollutes one's code with function-reference arguments at every level - very undesirable when the function under test would never call anything except the one operational function or its stub. You end up writing operational code to fit the limitations of the testing system.
This seems so very basic. Surely there is a way to stub external functions with Rust and Cargo?

You can try mock crates like mockall, which I think is the most complete one out there, but it still might take some time to get used to.
Without a mock crate, I would suggest writing mock versions of the traits/structs in another module, and then bringing them into scope with the #[cfg(test)] attribute. Of course, this makes it mandatory to annotate the production use statement with `#[cfg(not(test))]`. For example:
If you're using an external struct ExternalStruct from external-crate with a method external_method, you would have something like:
file real_code.rs:

#[cfg(not(test))]
use external_crate::ExternalStruct; // a hyphenated crate name is referenced with an underscore

#[cfg(test)]
use crate::test_mocks::ExternalStruct;

fn my_function() {
    //...
    ExternalStruct::external_method();
}

#[test]
fn test_my_function() {
    my_function();
}
file test_mocks.rs:

pub struct ExternalStruct {}

impl ExternalStruct {
    pub fn external_method() {
        // desired mock behaviour
    }
}
When running your tests, the ExternalStruct from test_mocks will be used; otherwise, the real dependency will be.

Related

unit testing C++11 closures

Is there any precedence for doing unit testing on C++ closures?
Functions that I write usually start out as closures defined near their point of use, and then (maybe) graduate to full functions later. This is nice for keeping interfaces clean and makes it easier to read the code in a linear fashion, but it undermines writing unit tests.
Are there any tricks or C++ unit testing frameworks that can handle, say, some little function for computing some geometry that is defined as a closure within my main()?
I would think you should be testing functions, not lambda functions. If a function contains lambda functions then they are implementation details. If you are reusing lambda functions by creating them as variables, then those are easily unit tested as functions.
E.g.:
auto lambda = [](/* params */){ /* stuff */ }; // this can be unit tested

void func() // this can be unit tested
{
    // the lambda is an implementation detail of the function
    sort(/* stuff */, [](/* params */){ /* stuff */ });
}
TL;DR: No.
In order to unit test the closure, you would have to give it a name that you can refer to, by assigning it to a variable.
If it is complicated enough to unit test on its own, you should extract a method and test that instead.
Short of that, you can always unit test the closure indirectly, through the method or function which contains it.
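For instance, here is a minimal sketch of that extraction (the names squaredDistance and closest are hypothetical): a geometry closure is promoted to a named free function, the closure becomes a thin wrapper, and the test exercises the named function directly.

#include <cassert>

// Extracted from the closure so it has a name a test can refer to.
double squaredDistance(double x1, double y1, double x2, double y2)
{
    double dx = x2 - x1;
    double dy = y2 - y1;
    return dx * dx + dy * dy;
}

int main()
{
    // The closure is now just a thin wrapper around the named function...
    auto closest = [](double x, double y) { return squaredDistance(0, 0, x, y); };

    // ...and the unit test targets the extracted function directly.
    assert(squaredDistance(0, 0, 3, 4) == 25.0);
    assert(closest(3, 4) == 25.0); // indirect check through the closure
    return 0;
}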
In short, no... but:
You can test the code that uses the closures. The fact that closures are embedded in the source code, and that there is no reflection mechanism to reach them, prevents you from unit-testing them directly (though you have many ways to test stuff, not only unit tests). However, code using closures is usually more compact, so as long as we test the whole block that uses the closures, it is fine to use them. I tend to write closures that are only a few lines of code; in fact, you could actually create a function (called by the closure) and unit test the function itself ;).
int function(MyClass *self) { // unit test here
    // ... the real work, operating on *self
    return 0;
}
//...
void MyClass::method() { // ... and unit test the method
    // the closure is just a thin adapter around the testable function
    auto f = [this]() { return function(this); };
    applyFunctorOnCollection(f);
}

Gtest with large C and C++ codebase

I am on a project where we have a large codebase that currently has no unit test framework at all. The code we are working on will ultimately run on a box which acts as a switch/router/firewall.
So I am working on a piece of code which needs to be unit-tested using Gtest.
The problem I have with this is mocking the variables in order to test the function itself.
For example, I have a function which uses four pointers to different objects and a couple of global variables. In order to test different paths in the code, I need to initialize almost the entire state machine/values of the dependent variables.
Adding to the complexity, as is typical in a large codebase, this function/method uses a bunch of other routines/methods which need to be unit-tested as well, each of them having its own dependencies.
I am not sure whether I am approaching the problem right, or whether gtest may not be the right tool for testing such a large codebase.
If anyone has experience with testing, say, a call stack like
function A {
    code
    code
    function B
    code
    code
    function C
    code
}

function B {
    function D
    code
    function E
}

function C {
    code
    function F
    function G
    code
}
something like this. How do I test all these functions A-G? What is a good strategy?
First thing is refactoring the code so that testable pieces are isolated. In particular, this means removing the access to globals. Example:
int global;

int function() {
    int r = foo();
    global += r / 2;
    bar(r);
    return 42;
}
Removing the global object means converting it to an input parameter:
int real_function(int* target) {
    assert(target);
    int r = foo();
    *target += r / 2;
    bar(r);
    return 42;
}
Then of course the remaining code will stop compiling, so you add a backward-compatibility kludge:
int global_bar;

// #deprecated, use real_function() directly
int function() {
    return real_function(&global_bar);
}
Using that, you step up the call chain, extracting the dependencies and hopefully one day removing the last call to the variant that requires the globals. In the meantime, you can write tests for the functions that don't depend on global stuff any longer. Note that for C++, you would use references instead of pointers and probably pass the required external objects to the class constructor. This is also called dependency injection, make sure to research that term for a thorough understanding.
Another way to test functions touching globals is to use the setup function of the test to reset the globals to a known state. This still requires linking in the global though, which may prove difficult. And not using globals may make the codebase much better in the first place, so accepting them also sends the wrong message.
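For the setup-function approach, here is a minimal gtest fixture sketch, sticking with the global/function() names from the example above:

#include <gtest/gtest.h>

extern int global; // the global touched by the legacy code
int function();    // the legacy function shown above

// SetUp() runs before every test in the fixture, so each test starts
// from a known global state instead of whatever the previous test left.
class LegacyFunctionTest : public ::testing::Test {
protected:
    void SetUp() override {
        global = 0;
    }
};

TEST_F(LegacyFunctionTest, ReturnsFortyTwo) {
    EXPECT_EQ(42, function());
}

TEST_F(LegacyFunctionTest, StartsFromCleanState) {
    EXPECT_EQ(0, global); // guaranteed by SetUp(), not by test ordering
}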
Ulrich Eckhardt essentially says, "You need to get rid of globals to make easily testable code". But you really should go further.
For any global function you want to test you should look at
The globals it accesses.
The parameters it uses.
The functions it calls.
Then consider:
Converting the functions it calls into calls on one or more interfaces, and passing those interfaces in as parameters (see the sketch after this list).
Converting the globals to parameters, or to function calls on an interface.
If your function is a function on an object rather than a global function, you can consider additionally:
Making the globals member variables, and passing them to the constructor
Making the functions it calls virtual member functions
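As a sketch of the interface-extraction items above (all names here - Clock, RealClock, report - are hypothetical stand-ins):

#include <string>

// The free function the code used to call directly.
long system_time_ms();

// 1. Gather the called functions behind an interface...
struct Clock {
    virtual ~Clock() = default;
    virtual long now() const = 0;
};

// 2. ...whose production implementation delegates to the old function...
struct RealClock : Clock {
    long now() const override { return system_time_ms(); }
};

// 3. ...and pass the interface in as a parameter.
std::string report(const Clock& clock) {
    return "t=" + std::to_string(clock.now());
}

// 4. A test substitutes a fake with canned behaviour:
struct FakeClock : Clock {
    long now() const override { return 1234; }
};
// assert(report(FakeClock{}) == "t=1234");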
The final thing I'd consider for making a function testable is whether it belongs in a class.
Once all these are addressed, you can usually mock the required bits easily. If you're using gtest, you may be able to use gmock to make this easier. (I've used gmock with gtest before and it's pretty painless.)
(And yes, I've taken this approach on large codebases before... it's usually pretty painful to start with, but once you get used to it and the code starts to get more testable, things will improve.)

Ignoring mock calls during setup phase

I often face the problem that mock objects need to be brought in a certain state before the "interesting" part of a test can start.
For example, let's say I want to test the following class:
struct ToTest
{
    virtual void onEnable();
    virtual void doAction();
};
Therefore, I create the following mock class:
struct Mock : ToTest
{
    MOCK_METHOD0(onEnable, void());
    MOCK_METHOD0(doAction, void());
};
The first test is that onEnable is called when the system that uses a ToTest object is enabled:
TEST(SomeTest, OnEnable)
{
    Mock mock;
    // register mock somehow

    // interesting part of the test
    EXPECT_CALL(mock, onEnable());
    EnableSystem();
}
So far, so good. The second test is that doAction is called when the system performs an action and is enabled. Therefore, the system should be enabled before the interesting part of the test can start:
TEST(SomeTest, DoActionWhenEnabled)
{
    Mock mock;
    // register mock somehow

    // initialize system
    EnableSystem();

    // interesting part of the test
    EXPECT_CALL(mock, doAction());
    DoSomeAction();
}
This works but gives an annoying warning about an uninteresting call to onEnable. There seem to be two common fixes of this problem:
Using NiceMock<Mock> to suppress all such warnings; and
Adding an EXPECT_CALL(mock, onEnable()) statement.
I don't want to use the first method since there might be other uninteresting calls that really should not happen. I also don't like the second method since I already tested (in the first test) that onEnable is called when the system is enabled; hence, I don't want to repeat that expectation in all tests that work on enabled systems.
What I would like to be able to do is say that all mock calls up to a certain point should be completely ignored. In this example, I want expectations to be only checked starting from the "interesting part of the test" comment.
Is there a way to accomplish this using Google Mock?
The annoying thing is that the necessary functions are there: gmock/gmock-spec-builders.h defines Mock::AllowUninterestingCalls and others to control the generation of warnings for a specific mock object. Using these functions, it should be possible to temporarily disable warnings about uninteresting calls.
The catch is, however, that these functions are private. The good thing is that class Mock has some template friends (e.g., NiceMock) that can be abused. So I created the following workaround:
namespace testing
{
// HACK: NiceMock<> is a friend of Mock, so we specialize it here to a type
// that is never used, to be able to temporarily make a mock nice. If this
// feature were simply supported, we wouldn't need this hack...
template<>
struct NiceMock<void>
{
    static void allow(const void* mock)
    {
        Mock::AllowUninterestingCalls(mock);
    }
    static void warn(const void* mock)
    {
        Mock::WarnUninterestingCalls(mock);
    }
    static void fail(const void* mock)
    {
        Mock::FailUninterestingCalls(mock);
    }
};

typedef NiceMock<void> UninterestingCalls;
}
This lets me access the private functions through the UninterestingCalls typedef.
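With that in place, the setup phase of a test can presumably be wrapped like this (a sketch reusing the typedef above and the EnableSystem/DoSomeAction helpers from the question):

TEST(SomeTest, DoActionWhenEnabled)
{
    Mock mock;
    // register mock somehow

    // setup phase: calls to the mock are explicitly allowed, no warnings
    testing::UninterestingCalls::allow(&mock);
    EnableSystem();

    // interesting part: restore the default warning behaviour and assert
    testing::UninterestingCalls::warn(&mock);
    EXPECT_CALL(mock, doAction());
    DoSomeAction();
}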
The flexibility you're looking for is not possible in gmock, by design. From the gmock Cookbook (emphasis mine):
[...] you should be very cautious about when to use naggy or strict mocks, as they tend to make tests more brittle and harder to maintain. When you refactor your code without changing its externally visible behavior, ideally you shouldn't need to update any tests. If your code interacts with a naggy mock, however, you may start to get spammed with warnings as the result of your change. Worse, if your code interacts with a strict mock, your tests may start to fail and you'll be forced to fix them. Our general recommendation is to use nice mocks (not yet the default) most of the time, use naggy mocks (the current default) when developing or debugging tests, and use strict mocks only as the last resort.
Unfortunately, this is an issue that we, and many other developers, have encountered. In his book, Modern C++ Programming with Test-Driven Development, Jeff Langr writes (Chapter 5, on Test Doubles):
What about the test design? We split one test into two when we changed from a hand-rolled mock solution to one using Google Mock. If we expressed everything in a single test, that one test could set up the expectations to cover all three significant events. That’s an easy fix, but we’d end up with a cluttered test.
[...]
By using NiceMock, we take on a small risk. If the code later somehow changes to invoke another method on the [...] interface, our tests aren’t going to know about it. You should use NiceMock when you need it, not habitually. Seek to fix your design if you seem to require it often.
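Following that advice, the second test from the question could make just this one test forgiving by wrapping the mock in NiceMock locally, for example:

TEST(SomeTest, DoActionWhenEnabled)
{
    // NiceMock silences the uninteresting-call warning for onEnable()
    // in this test only; other tests keep the naggy default.
    ::testing::NiceMock<Mock> mock;
    // register mock somehow
    EnableSystem(); // calls onEnable(), but no warning is emitted

    EXPECT_CALL(mock, doAction());
    DoSomeAction();
}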
You might be better off using a different mock class for your second test.
class MockOnAction : public ToTest {
// This is a non-mocked function that does nothing
virtual void onEnable() {}
// Mocked function
MOCK_METHOD0(doAction, void());
}
In order for this test to work, you can have onEnable do nothing (as shown above). Or it can do something special like calling the base class or doing some other logic.
virtual void onEnable() {
    // You could call the base class version of this function
    ToTest::onEnable();

    // or hardcode some other logic
    // isEnabled = true;
}

static methods and unit tests

I've been reading that static methods tend to be avoided when using TDD because they tend to be hard to mock. I find, though, that the easiest thing to unit test is a static method with simple functionality: you don't have to instantiate any classes, and it encourages methods that are simple, do one thing, and are "standalone", etc.
Can someone explain this discrepancy between TDD best practices and pragmatic ease?
thanks,
A
A static method is easy to test, but something that directly calls a static method generally is not easy to test independently of the static method it depends on. With a non-static method you can use a stub/mock/fake instance to ease testing, but if the code you're testing calls static methods, it's effectively "hard-wired" to those static methods.
The answer to the asked question is, in my opinion "Object Oriented seems to be all that TDD people think about."
Why? I don't know. Maybe they are all Java programmers who've been infected with the disease of making everything rely on six indirection layers, dependency injection and interface adapters.
Java programmers seem to love to make everything difficult up front in order to "save time later."
I advise applying some Agile principles to your TDD: If it isn't causing a problem then don't fix it. Don't over design.
In practice I find that if the static methods are tested well first then they are not going to be the cause of bugs in their callers.
If the static methods execute quickly then they don't need a mock.
If the static methods work with stuff from outside the program, then you might need a mock method. In this case you'd need to be able to simulate many different kinds of function behavior.
If you do need to mock a static method remember that there are ways to do it outside of OO programming.
For example, you can write scripts to process your source code into a test form that calls your mock function. You could link different object files that have different versions of the function into the test programs. You could use linker tricks to override the function definition (if it didn't get inlined). I am sure there are some more tricks I haven't listed here.
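To make the "different object files" idea concrete, here is a minimal sketch of such a link seam; the file and function names are hypothetical, and the same trick works for class static methods, since they are ordinary symbols at link time:

// config.h - shared declaration, used by production code and by tests
int read_config_value();

// config.cpp - linked into the production binary
int read_config_value() {
    // ... read from the real device, registry, etc.
    return 7;
}

// config_stub.cpp - linked into the TEST binary instead of config.cpp
static int stubbed_value = 0;
void set_stubbed_config(int v) { stubbed_value = v; } // test-only control knob
int read_config_value() { return stubbed_value; }

With GNU ld, a similar per-symbol override is possible via -Wl,--wrap=symbol (easiest with C-linkage functions), which reroutes calls to a __wrap_ version you provide.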
It's easy to test the static method. The problem is that there is no way to isolate your other code from that static method when testing the other code. The calling code is tightly-coupled to the static code.
A reference to a static method cannot be mocked by many mocking frameworks nor can it be overridden.
If you have a class that is making lots of static calls, then to test it you have to configure the global state of the application for all of those static calls - so maintenance becomes a nightmare. And if your test fails, then you don't know which bit of code caused the failure.
Getting this wrong is one of the reasons that many developers think TDD is nonsense. They put in a huge maintenance effort for test results that only vaguely indicate what went wrong. If they'd only reduced the coupling between their units of code, then maintenance would be trivial and the test results specific.
That advice is true for the most part... but not always. My comments are not C++ specific.
Writing tests for static methods which are pure/stateless functions: i.e., they work off their inputs to produce a consistent result. E.g., Add below will always give the same value for a particular set of inputs. There is no problem writing tests for these, or for code that calls such pure static methods.
Writing tests for static methods which consume static state: e.g., GetAddCount() below. Calling it in multiple tests can yield different values, so one test can potentially harm the execution of another - tests need to be independent. So now we need to introduce a method to reset the static state so that each test can start from a clean slate (e.g., something like ResetCount()).
Writing tests for code that accesses static methods, with no source-code access to the dependency: once again, this depends on the properties of the static methods themselves. If they are gnarly, you have a difficult dependency. If the dependency is an object, you could add a setter to the dependent type and set/inject a fake object for your tests. When the dependency is static, you may need a sizable refactoring before you can get tests running reliably, e.g., add an object middle-man that delegates to the static method, then plug in a fake middle-man for your tests (see the sketch at the end of this answer).
Let's take an example:
public class MyStaticClass
{
    static int __count = 0;

    public static int GetAddCount()
    { return ++__count; }

    public static int Add(int operand1, int operand2)
    { return operand1 + operand2; }

    // needed for testability
    internal static void ResetCount()
    {
        __count = 0;
    }
}
...

// test1
MyStaticClass.Add(2, 3);     // => 5
MyStaticClass.GetAddCount(); // => 1

// test2
MyStaticClass.Add(2, 3);     // => 5
//MyStaticClass.ResetCount(); // needed for tests
MyStaticClass.GetAddCount(); // => unless Reset is done, it can differ from 1
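For the third case above, here is a minimal sketch of the object middle-man idea, written in C++ for illustration; the names (SystemClock, TimeSource, Scheduler) are hypothetical stand-ins:

struct SystemClock { static long nowMillis(); }; // the gnarly static dependency

// Middle-man: an overridable object that delegates to the static method.
struct TimeSource {
    virtual ~TimeSource() = default;
    virtual long now() const { return SystemClock::nowMillis(); }
};

// Fake middle-man with canned behaviour for tests.
struct FakeTimeSource : TimeSource {
    long now() const override { return 42; }
};

// The code under test depends on the middle-man, not on the static method.
class Scheduler {
public:
    explicit Scheduler(const TimeSource& time) : time_(time) {}
    bool isExpired(long deadline) const { return time_.now() > deadline; }
private:
    const TimeSource& time_;
};

// In a test:
//   FakeTimeSource fake;
//   Scheduler s{fake};
//   assert(s.isExpired(10)); // 42 > 10, no real clock involved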

How to test function behavior in a unit test?

If a function just calls another function or performs actions, how do I test it? Currently, I require all functions to return a value so that I can assert on the return values. However, I think this approach messes up the API, because in the production code I don't need those functions to return values. Are there any good solutions?
I think mock objects might be a possible solution. I want to know when I should use asserts and when I should use mock objects. Is there any general guideline?
Thank you
Let's use BufferedStream.Flush() as an example method that doesn't return anything; how would we test this method if we had written it ourselves?
There is always some observable effect, otherwise the method would not exist. So the answer can be to test for the effect:
[Test]
public void FlushWritesToUnderlyingStream()
{
    var memory = new byte[10];
    var memoryStream = new MemoryStream(memory);
    var buffered = new BufferedStream(memoryStream);

    buffered.WriteByte(0xFF);
    Assert.AreEqual(0x00, memory[0]); // not yet flushed, memory unchanged

    buffered.Flush();
    Assert.AreEqual(0xFF, memory[0]); // now it has changed
}
The trick is to structure your code so that these effects aren't too hard to observe in a test:
Explicitly pass collaborator objects, just like how the memoryStream is passed to the BufferedStream in the constructor. This is called dependency injection.
Program against an interface, just like how BufferedStream is programmed against the Stream interface. This enables you to pass simpler, test-friendly implementations (like MemoryStream in this case) or use a mocking framework (like MoQ or RhinoMocks), which is all great for unit testing.
Sorry for not answering straight, but... are you sure you have the right balance in your testing?
I wonder if you are not testing too much?
Do you really need to test a function that merely delegates to another?
Returns only for the tests
I agree with you when you write you don't want to add return values that are useful only for the tests, not for production. This clutters your API, making it less clear, which is a huge cost in the end.
Also, your return value could look correct to the test, but nothing says the value returned actually corresponds to what the implementation did, so the test is probably not proving anything anyway...
Costs
Note that testing has an initial cost, the cost of writing the test.
If the implementation is very easy, the risk of failure is ridiculously low, but the time spent testing still accumulates (over hundreds or thousands of cases, it ends up being pretty serious).
But more than that, each time you refactor your production code, you will probably have to refactor your tests also. So the maintenance cost of your tests will be high.
Testing the implementation
Testing what a method does (what other methods it calls, etc.) is criticized, just like testing a private method... There are several points made:
This is fragile and costly: any code refactoring will break the tests, so this increases the maintenance cost.
Testing a private method does not bring much safety to your production code, because your production code is not making that call. It's like verifying something you won't actually need.
When a piece of code effectively delegates to another, the implementation is so simple that the risk of mistakes is very low, and the code almost never changes, so what works once (when you write it) will never break...
Yes, mock is generally the way to go, if you want to test that a certain function is called and that certain parameters are passed in.
Here's how to do it in Typemock (C#):
Isolate.Verify.WasCalledWithAnyArguments(() => myInstance.WeatherService("", "", null, 0));
Isolate.Verify.WasCalledWithExactArguments(() => myInstance.StockQuote("", "", null, 0));
In general, you should use Assert as much as possible, until you can't (for example, when you have to test whether you call an external web service API properly; in this case you can't, or don't want to, communicate with the web service directly). In that case you use a mock to verify that a certain web service method is correctly called with the correct parameters.
"I want to know when should I use assert and when should I use mock objects? Is there any general guide line?"
There's an absolute, fixed and important rule.
Your tests must contain assert. The presence of assert is what you use to see if the test passed or failed. A test is a method that calls the "component under test" (a function, an object, whatever) in a specific fixture, and makes specific assertions about the component's behavior.
A test asserts something about the component being tested. Every test must have an assert, or it isn't a test. If it doesn't have assert, it's not clear what you're doing.
A mock is a replacement for a component to simplify the test configuration. It is a "mock" or "imitation" or "false" component that replaces a real component. You use mocks to replace something and simplify your testing.
Let's say you're going to test function a. And function a calls function b.
The tests for function a must have an assert (or it's not a test).
The tests for a may need a mock for function b. To isolate the two functions, you test a with a mock for function b.
The tests for function b must have an assert (or it's not a test).
The tests for b may not need anything mocked. Or, perhaps b makes an OS API call. This may need to be mocked. Or perhaps b writes to a file. This may need to be mocked.
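To make the a/b relationship concrete, here is a minimal sketch (the functions a, b, and real_b are illustrative; b is injected as a parameter so the test can substitute a mock):

#include <cassert>
#include <functional>

// `a` receives `b` as a parameter, so tests can swap in a mock.
int a(const std::function<int(int)>& b)
{
    return b(2) + 1;
}

int real_b(int x) { return x * 10; }

int main()
{
    // Test for a: mock b with canned behaviour, then assert on a's result.
    int calls = 0;
    auto mock_b = [&calls](int) { ++calls; return 100; };
    assert(a(mock_b) == 101); // the assert makes this a test
    assert(calls == 1);       // the mock also verifies that b was called

    // Test for b: no mock needed, just an assert.
    assert(real_b(3) == 30);
    return 0;
}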