Should you know your dependencies in advance for each test

I am exploring TDD and SOLID principles. Say I have a service that I create a failing test for before writing the implementation.
public interface IService {
    bool DoSomething();
}

public class Service : IService {
    public bool DoSomething() {
        // ...
    }
}
[TestMethod]
public void ServiceImplementationTest()
{
    var implementation = new Service();
    bool result = implementation.DoSomething();
    Assert.IsTrue(result);
}
When I first write the test, I'm unaware of the dependencies of this service, so the constructor takes no arguments as I don't need to inject any dependencies.
However, as I write the implementation, I realise I need a certain dependency, and so add a reference to that dependency in the constructor. To keep the test code compiling and failing, I then have to go back to the test, and modify it to create a fake implementation.
public class Service : IService {
    private readonly IDependency _dependency;

    public Service(IDependency dependency) {
        _dependency = dependency;
    }

    public bool DoSomething() {
        // ... use _dependency ...
        return result;
    }
}
[TestMethod]
public void ServiceImplementationTest()
{
    var implementation = new Service(new MockDependency()); // the test now has to supply a fake
    bool result = implementation.DoSomething();
    Assert.IsTrue(result);
}
Is this just a fact of life? Should I know all the dependencies before writing the test? And what happens when I want to write a new implementation with different dependencies? Is it necessary to write a new test for each implementation, even though the correctness of the implementation hasn't changed?

Should you know your dependencies in advance for each test
Not necessarily, no.
What you are doing when you design "test first" is that you are exploring a possible API. So the following code
var implementation = new Service();
bool result = implementation.DoSomething();
Assert.IsTrue(result);
says, among other things, that the public API should allow you to create an instance of Service without knowing anything about its dependencies.
However, as I write the implementation, I realise I need a certain dependency, and so add a reference to that dependency in the constructor. To keep the test code compiling and failing, I then have to go back to the test, and modify it to create a fake implementation.
So notice two things here:
This isn't a backwards compatible change...
Which means it isn't a refactoring
So part of your problem is that you are introducing the changes you want in a backwards incompatible way. If you were refactoring, you would have a step where your constructors would look something like
public Service() : this(new DefaultDependency()) {
    // DefaultDependency stands in for whatever default
    // implementation preserves the current behaviour
}

Service(IDependency dependency) {
    _dependency = dependency;
}
At this point, you have two independent decisions:
Should Service(IDependency) be part of the public API? If so, then you start writing tests that force you to expose that constructor.
Should Service() be deprecated? If it should, then you plan to delete the tests that depend on it when you remove Service() from the public API.
Note: that is a lot of unnecessary steps if you haven't shared/released the public API yet. Unless you are deliberately adopting the discipline that calibrated tests are immutable, it is usually more practical to hack the test to reflect the most recent draft of your API.
Don't forget to re-calibrate the test after you change it, though; any change to the test should necessarily trigger a refresh of the Red/Green cycle to ensure that the revised test is still measuring what you expect. You should never be publishing a test that you haven't calibrated.
A pattern that is sometimes useful is to separate the composition of the system under test from the automated checks:
[TestMethod]
public void ServiceImplementationTest()
{
    var implementation = new Service();
    check(implementation);
}

void check(Service implementation) {
    bool result = implementation.DoSomething();
    Assert.IsTrue(result);
}
Another alternative is to write a spike: a draft of a possible implementation, written without tests, whose purpose is to explore and better understand the constraints of the problem, and which is thrown away when the learning exercise is complete.
Could you quickly describe what you mean by test calibration? Google isn't helping much. Is it making sure that tests fail when they are supposed to?
Yes: making sure that they fail when they are supposed to, making sure that when they do fail the messages reported are suitable, and of course making sure that they pass when they are supposed to.
I wrote a bit about the idea here.

Related

C++: Separating object's creation from use (for testing purposes)

Suppose I have code like below. http_client is an external dependency (a 3rd party API) I don't have control over. transaction_handler is a class I control and would like to write unit tests for.
// 3rd party
class http_client
{
public:
    std::string get(std::string url)
    {
        // makes an HTTP request and returns response content as string
        // throws if status code is not 200
    }
};

// code to be tested
enum class transaction_kind { sell, buy };
enum class status { ok, error };

class transaction_handler
{
private:
    http_client client;
public:
    status issue_transaction(transaction_kind action)
    {
        try
        {
            auto response =
                client.get(std::string("http://fake.uri/") +
                           (action == transaction_kind::buy ? "buy" : "sell"));
            return response == "OK" ? status::ok : status::error;
        }
        catch (const std::exception &)
        {
            return status::error;
        }
    }
};
Because http_client makes network calls, I would like to be able to substitute it in my tests with a mock implementation that cuts off the network and allows testing different conditions (ok/error/exception). Because transaction_handler is internal, I can modify it to make it testable, but I wouldn't want to go overboard (i.e. I would like to avoid pointers or dynamic polymorphism if possible). Ideally I would like to use a kind of dependency injection where my tests would inject a mock http_client.
I don't think I can/want to use "poor man's DI", where I would create an http_client instance in the caller and pass it to the transaction_handler (by const reference? std::shared_ptr?). Because I don't control http_client, I would have to come up with an interface, and in the product code I would have to wrap http_client in a wrapper class that implements this interface and forwards the calls to the actual/wrapped http_client instance. In the test code I would create a mock implementation of that interface. The interface would have to be a pure abstract class, which entails using the runtime polymorphism I wanted to avoid.
Another option is to use templates. If I changed the transaction_handler class to look as follows:
template <typename T = http_client>
class transaction_handler
{
private:
    T client;
public:
    transaction_handler(const std::function<T()> &create) : client(create())
    {}

    status issue_transaction(transaction_kind action)
    {
        // same as above, omitted for brevity
    }
};
I could now create a mock http_client class:
class http_client_mock
{
public:
    std::string get(std::string url)
    {
        return std::string("OK");
    }
};
and create the transaction_handler object in my tests like this:
transaction_handler<http_client_mock> t(
[]() -> http_client_mock { return http_client_mock(); });
while I could use the following in my product code:
transaction_handler<> t1(
[]() -> http_client { return http_client(); });
While it seems to work and fulfill most of my requirements (even though I don't like the fact that the code instantiating transaction_handler needs to be aware of the http_client type; maybe it can be hidden behind a factory class), does it make sense at all? Or maybe there are better ways of doing this kind of thing? I spent a considerable amount of time looking for simple DI patterns to make unit testing easier, but had a hard time finding something that would suit my needs. Also, my background is mostly C, so maybe I am approaching the problem from the wrong angle?
I'm maintaining a DI library, and your case is really interesting for me.
Mocking is about dynamic polymorphism or compile-time mocking (at least in C++). You pay one level of indirection to gain the ability to inject what you want (depending only on one interface).
If you want to make code testable, the best way is to use interfaces (pure virtual classes, since C++ does not have interfaces) and inject dependencies only through the constructor.
If you really want to avoid polymorphism (or you can't use it because of an external API), you may have to accept that some code is not fully testable.
Conventional way of doing things:
class ConcreteHttpClient : public virtual AbstractHttpClient { /*...*/ };
class MockHttpClient : public virtual AbstractHttpClient { /*...*/ };
You just choose what to inject based on your needs (I intentionally use "new" instead of showing some DI framework at work).
Production code:
new TransactionHandler(static_cast<AbstractHttpClient *>(new ConcreteHttpClient()));
Unit testing the transaction handler:
new TransactionHandler(static_cast<AbstractHttpClient *>(new MockHttpClient()));
If you later need to test some class that uses the transaction handler, and the transaction handler implements an interface
class TransactionHandler : public virtual AbstractTransactionHandler { /*...*/ };
then you just have to create a TransactionHandlerMock inheriting from AbstractTransactionHandler.
When you use interfaces, the advantage is that you can use a dependency injection framework to avoid poor man's injection.
Compile time mocking.
What you proposed is a viable alternative: basically you get "static polymorphism" thanks to templates.
Production code:
template <typename T = http_client>
class transaction_handler { /*...*/ };

new transaction_handler<>( /*...*/ );
Unit test code:
using transaction_handler_mocked = transaction_handler<http_client_mock>;

new transaction_handler_mocked( /*...*/ );
However this has a few issues:
You depend on the transaction_handler type in every part of your production code, so if you change it you have to recompile every file that depends on it.
You can't inject a mocked handler as a mock itself, unless you change all the classes depending on it to become templates accepting the handler (see the sketch after this list).
Point 2 basically means that in a complex project every class ends up depending on every other, increasing compile times and forcing whole recompiles of your project for little changes.
You are not using an interface (pure virtual class), which means you can still accidentally access fields or members of the template parameter that were not intended to be accessed, making your debugging harder.
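To make point 2 concrete, here is a rough sketch (order_service is an invented name; transaction_handler, status and transaction_kind are the question's types): every consumer of the handler has to become a template as well before a mocked handler can reach it.
#include <utility> // std::move

// Hypothetical consumer: it must itself be a template so that tests
// can substitute a mocked handler for the real transaction_handler.
template <typename Handler = transaction_handler<>>
class order_service
{
public:
    explicit order_service(Handler h) : handler(std::move(h)) {}

    status buy()
    {
        // delegate to the (possibly mocked) handler
        return handler.issue_transaction(transaction_kind::buy);
    }

private:
    Handler handler;
};
A test would then instantiate order_service<transaction_handler<http_client_mock>>, and anything that uses order_service faces the same choice again, all the way up the object graph.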
Other alternatives
Provide the mock at link time; you don't have to mock a whole 3rd-party library, just the classes you are testing. Most IDEs are not designed to work that way, but you can probably work around that with a shell script or custom makefile. (For example, I do that for testing code dependent on C functions, in particular OpenGL.)
Wrap the interesting functionality of the library behind a class implementing a pure virtual class (I don't know why you want to avoid that); a minimal sketch follows this list. There is a good chance you won't need to wrap all the methods, and you'll end up with a smaller API to test against (just wrap the parts you need; don't start up front by wrapping the whole library).
Use a mocking framework, and possibly a dependency Injection framework.
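As a rough sketch of the wrapping approach from the second bullet (the names i_http_client, http_client_wrapper and http_client_stub are invented for illustration), only the one method the handler actually uses needs to be wrapped:
#include <string>
#include <utility>

// Hypothetical minimal interface covering only what transaction_handler needs.
class i_http_client
{
public:
    virtual ~i_http_client() = default;
    virtual std::string get(std::string url) = 0;
};

// Production wrapper forwarding to the real 3rd-party http_client.
class http_client_wrapper : public i_http_client
{
public:
    std::string get(std::string url) override
    {
        return client.get(std::move(url)); // the real network call
    }

private:
    http_client client; // the 3rd-party class from the question
};

// Test double: same interface, no network.
class http_client_stub : public i_http_client
{
public:
    std::string get(std::string) override { return "OK"; }
};
transaction_handler would then accept an i_http_client& (or a smart pointer) through its constructor, at the cost of one virtual call per request.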
Just write test routines that match whichever exports of http_client you're using. Your source will be linked in preference to any library.
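For what it's worth, a sketch of that link-seam idea, assuming http_client is declared in a header with its definition in a separate translation unit (unlike the inline definition in the question; the header name here is invented):
// test_http_client.cpp -- compiled into the test binary *instead of* the
// file containing the real implementation; the linker then resolves
// http_client::get to this definition.
#include "http_client.h" // hypothetical header declaring http_client
#include <string>

std::string http_client::get(std::string url)
{
    (void)url;   // the URL is irrelevant in tests
    return "OK"; // canned response: every request "succeeds"
}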
I think it's a good idea to mock it. Since http_client is an external dependency, I chose Typemock Isolator++ to handle it. Look at the code below:
TEST_METHOD(FakeHttpClient)
{
    // Arrange
    std::string okStr = "OK";
    http_client* mock_client = FAKE_ALL<http_client>();
    WHEN_CALLED(mock_client->get(ANY_VAL(std::string))).Return(&okStr);

    // Act
    transaction_handler my_handler;
    status result = my_handler.issue_transaction(transaction_kind::buy);

    // Assert
    Assert::AreEqual((int)status::ok, (int)result);
}
The FAKE_ALL<> method allows me to set the behavior of all http_client instances, so no injection is needed. Simple API, the code stays clean, and you don't need to change the production code.
Hope it helps!

tdd - should I mock here or use real implementation

I'm writing a program arguments parser, just to get better at TDD, and I'm stuck with the following problem. Say I have my parser defined as follows:
class ArgumentsParser {
    private final ArgumentsConfiguration configuration;

    public ArgumentsParser(ArgumentsConfiguration configuration) {
        this.configuration = configuration;
    }

    public void parse(String[] programArguments) {
        // all the stuff for parsing
    }
}
and I imagine having an ArgumentsConfiguration implementation like:
class ArgumentsConfiguration {
    private Map<String, Class> map = new HashMap<String, Class>();

    public void addArgument(String argName, Class valueClass) {
        map.put(argName, valueClass);
    }

    // get configured arguments methods etc.
}
This is where I am currently. For now, in the test I use:
@Test
public void shouldResultWithOneAvailableArgument() {
    ArgumentsConfiguration config = prepareSampleConfiguration();
    config.addArgument("mode", Integer.class);
    ArgumentsParser parser = new ArgumentsParser(config);
    parser.parse(new String[] { "mode", "1" });
    // ....
}
My question is whether this approach is correct. I mean, is it OK to use the real ArgumentsConfiguration in tests, or should I mock it out? The default (current) implementation is quite simple (just a wrapped Map), but I imagine it could be more complicated, like fetching the configuration from some kind of datasource. Then it'd be natural to mock such "expensive" behaviour. But which way is preferred here?
EDIT:
Maybe more clearly: should I mock ArgumentsConfiguration even without writing any implementation (just defining its public methods), use the mock for testing, and deal with the real implementation(s) later? Or should I use the simplest real implementation in tests and let them cover it indirectly? But if so, what about testing another configuration implementation provided later?
Then it'd be natural to mock such "expensive" behaviour.
That's not the point. You're not mocking complex classes.
You're mocking to isolate classes completely.
Complete isolation assures that the tests demonstrate that classes follow their interface and don't have hidden implementation quirks.
Also, complete isolation makes debugging a failed test much, much easier. It's either the test, the class under test or the mocked objects. Ideally, the test and mocks are so simple they don't need any debugging, leaving just the class under test.
The correct answer is that you should mock anything that you're not trying to test directly (e.g.: any dependencies that the object under test has that do not pertain directly to the specific test case).
In this case, because your ArgumentsConfiguration is so simple, I'd recommend using the real implementation until your requirements demand something more complicated. There doesn't seem to be any logic in your ArgumentsConfiguration class, so it's safe to use the real object. If the time comes when the configuration is more complicated, the approach to take would probably be not to create a configuration that talks to some data source, but instead to generate the ArgumentsConfiguration object from that data source. Then you could have a test that makes sure the configuration is generated properly from the data source, and you don't need unnecessary abstractions.

Does the self-shunt testing pattern violate the Single Responsibility Principle?

I've used the self-shunt unit testing pattern a few times over the years. As I was explaining it to someone recently they argued that it violated the SRP. The argument is that the test class can now be changed for one of two reasons: when the test changes, or when the method signature on the interface that the test is implementing changes. After thinking about it for a while, it seems like this is a correct assessment, but I wanted to get other peoples' opinions. Thoughts?
Reference:
http://www.objectmentor.com/resources/articles/SelfShunPtrn.pdf
My take on this is that the test class technically violates the SRP, but it doesn't violate the spirit of the SRP. The alternative to self-shunting is to have a mock class separate from the test class.
With the separate mock class you might think that it's all self-contained and satisfies the SRP; however, the semantic coupling to the mock class's attributes is still there. So, really, we didn't achieve any meaningful separation.
Taking the example out of the PDF:
public class ScannerTest extends TestCase implements Display
{
    public ScannerTest(String name) {
        super(name);
    }

    public void testScan() {
        // pass self as a display
        Scanner scanner = new Scanner(this);
        // scan calls displayItem on its display
        scanner.scan();
        assertEquals(new Item("Cornflakes"), lastItem);
    }

    // impl. of Display.displayItem()
    public void displayItem(Item item) {
        lastItem = item;
    }

    private Item lastItem;
}
Now we make a Mock:
public class DisplayMock implements Display
{
    // impl. of Display.displayItem()
    public void displayItem(Item item) {
        lastItem = item;
    }

    public Item getItem() {
        return lastItem;
    }

    private Item lastItem;
}
public class ScannerTest extends TestCase
{
    public ScannerTest(String name) {
        super(name);
    }

    public void testScan() {
        // pass the mock as a display
        DisplayMock dispMock = new DisplayMock();
        Scanner scanner = new Scanner(dispMock);
        // scan calls displayItem on its display
        scanner.scan();
        assertEquals(new Item("Cornflakes"), dispMock.getItem());
    }
}
In practical terms (IMHO), the higher coupling of the test class to DisplayMock is a greater evil than the test class's violation of the SRP. Besides, with the use of mocking frameworks, this problem goes away completely.
EDIT I have just encountered a brief mention of self-shunt pattern in Robert C. Martin's excellent book Agile Principles, Patterns, and Practices in C#. Here is the snippet out of the book:
We can accomplish this by using an abstract interface for the database. One implementation of this abstract interface uses the real database. Another implementation is test code written to simulate the behavior of the database and to check that the database calls are being made correctly. Figure 29-5 shows the structure. The PayrollTest module tests the PayrollModule by making calls to it and also implements the Database interface so that it can trap the calls that Payroll makes to the database. This allows PayrollTest to ensure that Payroll is behaving properly. It also allows PayrollTest to simulate many kinds of database failures and problems that are otherwise difficult to create. This is a testing pattern known as SELF-SHUNT, also sometimes known as mocking or spoofing.
So, the guy who coined the SRP (which is discussed in great detail in the same book) has no qualms about using the self-shunt pattern. In light of that, I'd say you are pretty safe from the OOP (Object Orientated Police) when using this pattern.
In my opinion it is a violation, but a very minor one.
Your test class is now both a test class and a dependency of whatever you are testing.
However, is that a bad thing? For a couple of simple tests, probably not. As your number of test cases grows, you'll probably want to refactor and use a mock class to separate some of the concerns. (As the link you pasted says, self-shunt is a stepping stone to mocking). But if the number of test cases remains static and low, what's the problem?
I think a little pragmatism is required. Does it violate the SRP? Yes, but I'm guessing probably not as much as some of the code in the system you're testing. Do you need to do anything about it? No, not as long as the code is clear and maintainable, which for me is always the bottom line. SRP is a guideline, not a rule.
If the interface being implemented or shunted is changed, it's relatively likely that the test suite will have to change as well. So I don't really see it as a violation of SRP.
I prefer to have a little more control over the mocks/stubs that I'm creating. When I tried using the self shunt pattern I ended up making my test class more complex. By creating the mocks as locals within a test method I ended up having cleaner code.
FWIW, unless you're using something powerful like C# (or Python or an equivalent), your test code will change when you change an interface.

Why doesn't the following mocking with Ninject.Moq work?

I'm trying to run the following code with Ninject.Moq:
[TestMethod]
public void TestMethod1()
{
    var kernel = new MockingKernel();
    var engine = kernel.Get<ABC>();
    // as I don't need to actually use the interfaces, I don't want
    // to even have to bother about them.
    Assert.AreEqual<string>("abc", engine.ToString());
}
And here is the ABC class definition:
public class ABC {
    IA a;
    IB b;

    public ABC(IA a, IB b)
    {
        this.a = a;
        this.b = b;
    }

    public override string ToString()
    {
        return "abc";
    }
}
I'm getting the following exception:
System.ArgumentException: A matching constructor for the given arguments was not found on the mocked type. ---> System.MissingMethodException: Constructor on type 'AbcProxya759aacd0ed049f3849aaa75e2a7bade' not found.
Ok, this will make the code work:
[TestMethod]
public void TestMethod1()
{
    var kernel = new MockingKernel();
    kernel.Bind<ABC>().ToSelf();
    var engine = kernel.Get<ABC>();
    // as I don't need to actually use the interfaces, I don't want
    // to even have to bother about them.
    Assert.AreEqual<string>("abc", engine.ToString());
}
One has to bind ABC to itself; otherwise it will also get mocked, and Moq only supports mocking classes that have a parameterless constructor, which ABC does not.
It's a bit like understanding DI in the first place: small samples don't really get the whole point across.
An automocking container like Ninject.Moq (or similar test infrastructure libraries like AutoFixture) is hard to really explain with a simple example. I'd suggest reading all of Mark Seemann's posts on AutoFixture as a way of getting a feel for the requirement.
So Ninject.Moq will deal with the chaining, N levels deep, of the set of stub implementations of interfaces that are necessary to satisfy your system under test, in the course of doing the thing your test is actually supposed to be testing.
In general, you want short, easy-to-read, easy-to-grok tests with minimal complexity and interaction of fixtures under the covers (no big hierarchy of base classes, or six different magic methods doing wacky teardown and calling base classes). Normally this aim will mean you should keep your DI toolery miles away from your unit tests.
An automocking container should, like a chainsaw, only be used where you're going to get a significant real return (many shorter, easier-to-understand tests) for your investment (another tool to understand before others can read your tests, more debugging, more surprises, more complexity that will lead to brittle, unmaintainable tests).

Unit testing void methods/mocking object tell-tale signs

When unit testing a codebase, what are the tell-tale signs that I need to utilise mock objects?
Would this be as simple as seeing a lot of calls to other objects in the codebase?
Also, how would I unit test methods which don't return values? So if a method returns void but prints to a file, do I just check the file's contents?
Mocking is for external dependencies, so that's literally everything, no? File system, db, network, etc...
If anything, I probably overuse mocks.
Whenever a class makes a call to another, I generally mock that call out, and I verify that the call was made with the correct parameters. Elsewhere, I'll have a unit test that checks that the concrete code of the mocked-out object behaves correctly.
Example:
[Test]
public void FooMoo_callsBarBaz_whenXisGreaterThan5()
{
    int TEST_DATA = 6;

    var bar = new Mock<Bar>();
    bar.Setup(x => x.Baz(It.Is<int>(i => i == TEST_DATA)))
       .Verifiable();

    var foo = new Foo(bar.Object);
    foo.moo(TEST_DATA);

    bar.Verify();
}
...
[Test]
public void BarBaz_doesSomething_whenCalled()
{
    // another test
}
The thing for me is, if I try to test lots of classes as one big glob, then there's usually tonnes of setup code. Not only is this quite confusing to read as you try to get your head around all the dependencies, it's very brittle when changes need to be made.
I much prefer small succinct tests. Easier to write, easier to maintain, easier to understand the intent of the test.
Mocks/stubs/fakes/test doubles/etc. are fine in unit tests, and permit testing the class/system under test in isolation. Integration tests might not use any mocks; they actually hit the database or other external dependency.
You use a mock or a stub when you have to. Generally this is because the class you're trying to test has a dependency on an interface. For TDD you want to program to interfaces, not implementations, and use dependency injection (generally speaking).
A very simple case:
public class ClassToTest
{
    private readonly IDependency _dependency;

    public ClassToTest(IDependency dependency)
    {
        _dependency = dependency;
    }

    public bool MethodToTest()
    {
        return _dependency.DoSomething();
    }
}
IDependency is an interface, possibly one with expensive calls (database access, web service calls, etc.). A test method might contain code similar to:
// Arrange
var mock = new Mock<IDependency>();
mock.Setup(x => x.DoSomething()).Returns(true);
var systemUnderTest = new ClassToTest(mock.Object);
// Act
bool result = systemUnderTest.MethodToTest();
// Assert
Assert.That(result, Is.True);
Note that I'm doing state testing (as @Finglas suggested), and I'm only asserting against the system under test (the instance of the class I'm testing). I might check property values (state) or the return value of a method, as this case shows.
I recommend reading The Art of Unit Testing, especially if you're using .NET.
Unit tests are for one piece of code that works autonomously within itself, meaning it doesn't depend on other objects to do its work. You should use mocks if you are doing test-driven or test-first programming: you create a mock (or stub, as I like to call it) of the function you will be creating and set certain conditions for the test to pass. Originally the function returns false and the test fails, which is expected ... then you write the code to do the real work until it passes.
But what I think you are referring to is integration testing, not unit testing. In that case, you should use mocks if you are waiting for other programmers to finish their work and you currently don't have access to the functions or objects they are creating. If you know the interface (which hopefully you do; otherwise mocking is pointless and a waste of time), then you can create a dumbed-down version of what you are hoping to get in the future.
In short, mocks are best utilized when you are waiting for others and need something in place in order to finish your work.
You should try to always return a value if possible. Sometimes you run into problems where you are already returning something, but in C and C++ you can have output parameters and then use the return value for error checking.
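A minimal sketch of that C/C++ idiom (the names here are invented for illustration): the data travels through an output parameter while the return value carries success or failure.
#include <string>

// Hypothetical example: the result leaves through an output parameter,
// freeing the return value to report errors.
bool read_setting(const std::string &key, std::string &value_out)
{
    if (key.empty())
        return false;                 // failure signalled by the return value
    value_out = "value-for-" + key;   // result delivered via the parameter
    return true;
}

// A caller can then branch on the status and only use the output on success:
// std::string value;
// if (read_setting("mode", value)) { /* use value */ }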