I'll start my question with short example:
SomeResult DoSomething(input)
{
    var a = svc1.getA(input);
    if (condition with a)
    {
        var b = svc2.getB(a);
        if (cond with b)
        {
            var c = svc3.getC(b);
            if (cond with c)
            {
            }
            else
            {
            }
        }
        else
        {
        }
    }
    else
    {
    }
}
I believe the idea is clear here: we have complex branching logic where the conditions depend on interim results returned by injected services.
When we want to test the cond with c part, we have to mock svc1, svc2, and svc3.
To reach cond with b, we have to mock svc1 and svc2.
Thus we replay all the upper parts of the execution path every time we go a level deeper. Guess how that is usually done? Bingo: copy-paste!
We have bunches of unit tests where most of the lines are occupied by object (a, b, c, ...) initialization and service mocking. When a, b, or c are objects with tens of properties, all this looks like a real hell. A tiny change in cond with a can easily break 20 tests simultaneously.
I insist on having some notion of "jump straight to the place I want to test".
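To make the pain concrete, a test that merely reaches cond with b already looks something like this (a sketch using Moq and NUnit; ISvc1/ISvc2/ISvc3, Input, A, and B stand in for our real types):
[Test]
public void DoSomething_WhenCondBHolds_TakesTheBBranch()
{
    // replay the whole upper path just to get past "condition with a"
    var a = new A { /* tens of properties... */ };
    var b = new B { /* ...and tens more */ };
    var svc1 = new Mock<ISvc1>();
    svc1.Setup(s => s.getA(It.IsAny<Input>())).Returns(a);
    var svc2 = new Mock<ISvc2>();
    svc2.Setup(s => s.getB(a)).Returns(b);

    var sut = new SomeClass(svc1.Object, svc2.Object, Mock.Of<ISvc3>());
    var result = sut.DoSomething(new Input());

    // ...assertions about the "cond with b" branch...
}
Every test that goes one level deeper repeats all of these lines, plus more.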
What if we changed the code like this:
SomeResult DoSomething(input)
{
    var a = svc1.getA(input);
    if (condition with a)
    {
        var b = svc2.getB(a);
        if (cond with b)
        {
            ProcessBLikeThis(b);
        }
        else
        {
        }
    }
    else
    {
    }
}
Then we could test ProcessBLikeThis separately from unrelated logic.
Yet for it to be testable it must be public. Moreover, as we want tests verifying that ProcessBLikeThis was called with the given argument depending on cond with b, we either need to use an isolation framework or make ProcessBLikeThis a method of some interface.
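For illustration, if ProcessBLikeThis were moved onto a hypothetical IBProcessor interface and injected, the test could jump straight to the interesting part (again a Moq sketch with stand-in types):
[Test]
public void DoSomething_WhenCondBHolds_HandsBToTheProcessor()
{
    var b = new B();
    var svc1 = new Mock<ISvc1>();
    svc1.Setup(s => s.getA(It.IsAny<Input>()))
        .Returns(new A { /* properties chosen to satisfy "condition with a" */ });
    var svc2 = new Mock<ISvc2>();
    svc2.Setup(s => s.getB(It.IsAny<A>())).Returns(b);
    var processor = new Mock<IBProcessor>();

    var sut = new SomeClass(svc1.Object, svc2.Object, processor.Object);
    sut.DoSomething(new Input());

    // only the routing is asserted here; what ProcessBLikeThis does gets its own tests
    processor.Verify(p => p.ProcessBLikeThis(b), Times.Once());
}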
However, there is no other need for such a granular design besides DRY-adherent testability.
So I'd appreciate some guidance on how to design and test such methods.
Addition:
I also forgot to mention that my teammates are strongly against putting initialization logic in reusable methods, as they see no strict borderline between what can be put there and what cannot, and they expect that some day someone will extend the code and break the test logic. They prefer copy-paste as a means of isolation.
If your team wants to repeat themselves and you don't want them to, then that discussion needs to take place and a consensus formed, so that everybody works with the same goals in mind. There are arguments for repeating some test setup code, usually down to readability; however, this can usually be overcome by naming any methods and variables sensibly so that their usage is obvious.
The argument that tests can't reuse code because somebody might change the shared code and break tests is a bit of a null argument. If somebody did change the shared logic and a bunch of tests didn't break you would have a larger problem. As you've said, the more likely scenario is that a small change in the production code will result in a bunch of tests failing. If the tests don't share relevant setup code, then the fix is likely to be blindly copy/pasted into each of the tests to make them work.
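For example, one sensibly named helper can replace the pasted arrange blocks (a sketch with Moq; ISvc1, Input, and A are the stand-in types from the question):
// shared, intention-revealing setup: when "condition with a" changes, fix it here once
private static Mock<ISvc1> Svc1ReturningAThatSatisfiesCondition()
{
    var svc1 = new Mock<ISvc1>();
    svc1.Setup(s => s.getA(It.IsAny<Input>()))
        .Returns(new A { /* properties chosen to satisfy "condition with a" */ });
    return svc1;
}
A change to cond with a is then fixed in one place instead of in twenty tests.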
That said, the usual way to simplify testing is to create a different level of indirection so you're testing less. With the code you've posted, one approach might be to separate the flow logic from the action logic.
You might end up with code something like this (the names would obviously need to be tailored to your situation):
interface ISomeActioner {
    bool IsTriggered(SomeStateProvider state);
    SomeResult TriggeredAction(SomeStateProvider state);
    SomeResult UntriggeredAction(SomeStateProvider state);
}

SomeResult DoSomething(input) {
    SomeResult result = SomeResult.Unknown;
    foreach(var actioner in _someActions) {
        if(actioner.IsTriggered(/* some state provider */)) {
            result = actioner.TriggeredAction(/* some state provider */);
        } else {
            result = actioner.UntriggeredAction(/* some state provider */);
        }
        if(result != SomeResult.Unknown) break;
    }
    return result;
}
You then end up implementing several classes that implement the ISomeActioner interface. Each one of these classes is straightforward; it checks the state from the state provider and returns a flag to indicate which of its other functions should be called. These classes can be tested in isolation to make sure that each public method does what is expected, by setting up the SomeStateProvider to the appropriate state before calling each of its methods.
An ordered list of these classes would then need to be injected into the class containing the DoSomething method. This allows you to use mocked instances of the interface when testing the DoSomething method, which effectively becomes a test of a foreach loop.
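For example, one such actioner and its isolated test might look like this (a sketch; OverdueActioner, DaysOverdue, and the SomeResult values are illustrative, not from your code):
public class OverdueActioner : ISomeActioner
{
    // triggered when the state says something is overdue (an illustrative rule)
    public bool IsTriggered(SomeStateProvider state)
    {
        return state.DaysOverdue > 0;
    }

    public SomeResult TriggeredAction(SomeStateProvider state)
    {
        return SomeResult.ReminderSent;
    }

    public SomeResult UntriggeredAction(SomeStateProvider state)
    {
        return SomeResult.Unknown;
    }
}

[Test]
public void IsTriggered_ReturnsTrue_WhenDaysOverdueIsPositive()
{
    var state = new SomeStateProvider { DaysOverdue = 3 };

    Assert.IsTrue(new OverdueActioner().IsTriggered(state));
}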
In my application there are two layers in our design: APIs and Operations.
1. Operations implement the "real" logic of the code, for example: authenticating the user, retrieving book information, informing a user that his book has been viewed.
The same operation may be used by many APIs.
2. APIs are executed by users: they receive parameters, and then execute various operations according to the logic of the API.
For example, ViewBookApi:
class BookApis
{
    /**
     * authenticateUserOperation, retrieveBookOperation, informUserBookWasViewedOperation
     * are injected into this class (dependency injection).
     */
    public function viewBookApi($bookId, $accessToken)
    {
        $internalUserId = $this->authenticateUserOperation($accessToken);
        $book = $this->retrieveBookOperation($bookId, $internalUserId);
        $this->informUserBookWasViewedOperation($book->getOwnerUserId(), $bookId);
        return $book->getContent();
    }
}
How should I test this design?
1. If I test the APIs, then I'll have to repeat the same tests for APIs that use the same operations.
2. If I test the operations, all I have to do is verify that an API uses the operations correctly.
But what if a wrong object is injected into an API? No test would fail then.
Thank you very much.
Your design is quite common (and rightfully so), so I'm a bit surprised that this question keeps coming up.
There are two types of tests you need here:
Integration tests - make sure that the flow that starts with the API call and ends with the Operations layer doing its job works correctly
Unit tests - test each of the classes of the API layer, as well as of the Operations layer
Integration tests are pretty self-explanatory (if not, let me know and I'll elaborate), so I'm guessing that you're referring to unit tests. The two different layers need to be tested differently.
The Operations layer:
Here you're trying to check that the classes doing the actual job are working. This means that you should instantiate the class you're testing, feed it with input, and check that the output it provides matches your expectations.
Say you have a class of this sort:
public class OperationA {
    public int multiply(int x, int y) {
        return x * y;
    }
}
Checking that it does what you expect would mean writing a test such as this (the test cases themselves are just an example; don't take them too seriously):
public class OperationATest {
    @Test
    public void testMultiplyZeroByAnyNumberResultsInZero() {
        OperationA op = new OperationA();
        assertEquals(0, op.multiply(0, 0));
        assertEquals(0, op.multiply(10, 0));
        assertEquals(0, op.multiply(-10, 0));
        ...
    }

    @Test
    public void testMultiplyNegativeByNegativeResultsInPositive() {
        ...
    }
    ...
}
The API layer:
Here you're trying to check that the classes are using the right classes from the Operations layer, in the right order, doing the right operations. To do that, you should use mocks and their verify operations.
Say you have a class of this sort:
public class API_A {
    private OperationA op;

    public API_A(OperationA op) {
        this.op = op;
    }

    public int multiplyTwice(int a, int b, int c) {
        int x = op.multiply(a, b);
        int y = op.multiply(x, c);
        return y;
    }
}
Checking that it does what you expect would mean (using Mockito syntax) writing a test such as:
public class API_A_Test {
    @Test
    public void testMultiplyTwiceMultipliesTheFirstNumberByTheSecondAndThenByTheThird() {
        OperationA op = mock(OperationA.class);
        when(op.multiply(12, -5)).thenReturn(-60);
        API_A api = new API_A(op);
        api.multiplyTwice(12, -5, 0);
        verify(op).multiply(12, -5);
        verify(op).multiply(-60, 0);
    }
}
I sense a contradiction here.
It seems you have
Operations = reusable blocks of logic
APIs = delegate to operations + script of which operations to call
From that, I'd say you'd need to test both, since:
* An operation itself may be implemented incorrectly; with only API tests, a single bug in one operation can fail multiple clients/tests.
* An API might not be calling a required (and implemented) operation... even though the operation tests may pass.
So:
Tests for operations - validate the reusable blocks
Tests for APIs - verify the logic internal to the APIs and check whether the right operations were invoked (mocks can be used). You don't duplicate/test the logic inside the operations here.
I don't usually test the dependency injection itself, because it's unlikely that someone fakes a collaborator (unless it's a penetration tester). You could write a test for that as well, though.
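If you did want such a test, a minimal sketch (in C#; CompositionRoot.CreateBookApis and the exposed AuthenticateOperation property are assumptions, not part of the question's code) could resolve the object graph the way production does and check the wiring:
[Test]
public void BookApis_IsWiredWithTheRealAuthenticationOperation()
{
    // build the graph exactly as production does (hypothetical factory)
    BookApis api = CompositionRoot.CreateBookApis();

    // guards against a wrong collaborator being injected
    Assert.IsInstanceOf<AuthenticateUserOperation>(api.AuthenticateOperation);
}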
Generally speaking, it seems like you need two layers of tests.
First, you need to test the basic building blocks - the operations. E.g., you should test authenticateUserOperation with a valid token, with an invalid token and with a NULL.
Once you've tested all the operations, you can move on to test the logic of the APIs. The idea here is not to double the testing code, but to test only the business logic. This can be achieved by mocking or injecting operations with known behaviors, and checking how the APIs deal with them.
For example, you could create a mock authenticateUserOperation that always fails and test that viewBookApi returns NULL (or throws an exception) when it's used. This way you only test how the API handles the results of the operations, without duplicating the tests.
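Sketched in C# with Moq (the interfaces and exception type are assumptions standing in for your PHP operations), that test might look like:
[Test]
public void ViewBookApi_Throws_WhenAuthenticationFails()
{
    var auth = new Mock<IAuthenticateUserOperation>();
    auth.Setup(a => a.Execute(It.IsAny<string>()))
        .Throws(new AuthenticationException());

    var api = new BookApis(auth.Object,
                           Mock.Of<IRetrieveBookOperation>(),
                           Mock.Of<IInformUserBookViewedOperation>());

    // the API should surface the failure without touching the other operations
    Assert.Throws<AuthenticationException>(() => api.ViewBookApi(42, "bad-token"));
}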
You should perform two sets of tests here: unit tests and integration tests.
1) Unit tests - this set of tests should be written for each Operation's methods to make sure each method works as expected. Methods can have different paths (if/else, exceptions), so write multiple unit tests to cover each path. Since these are unit tests, you should mock all the external calls (calls to other classes), which makes sure you are verifying only what that method itself does. Since you don't have any logic in the API classes, there is no need to write unit tests for them.
2) Integration tests - write integration tests that make actual API calls. These tests verify your end-to-end scenario (API plus operations).
If you have any persistence/repository classes you should write separate tests for them as well.
As far as I understand your first question, there is no way you can cover this with unit tests alone (as you tagged it). You might also consider pairwise testing when creating your test logic; it addresses the interactions between the Operations and the APIs.
Regarding a wrong object being injected into an API: once DI is involved, your unit testing gets a bit easier. Since the components are loosely coupled, you can use a dummy mock class to act as the component needed, and by doing so your code becomes very flexible.
But I don't think this is the only case you are looking for, so I'd also propose applying the Decorator design pattern in your test code, using its concept of attaching additional responsibilities to the test object dynamically for the API-Operations relation.
Consider you have the following method:
public Foo ParseMe(string filepath)
{
    // break up filename
    // validate filename & extension
    // retrieve info from file if it's a certain type
    // some other general things you could do, etc.
    var myInfo = GetFooInfo(filename);
    // create new object based on this data returned AND data in this method
}
Currently I have unit tests for GetFooInfo, but I think I also need to build unit tests for ParseMe. In a situation like this, where you have two methods that return two different properties, and a change in either of them could break something, should unit tests be created for both to determine that the output is as expected?
I like to err on the side of caution, being wary of things breaking and making sure that maintenance down the road is easier, but I feel skeptical about adding very similar tests to the test project. Would this be bad practice, or is there a way to do this more efficiently?
I'm marking this as language-agnostic, but just in case it matters, I am using C# and NUnit. Also, I saw a post similar to this in title only; the question is different. Sorry if this has already been asked.
ParseMe looks sufficiently non-trivial to require a unit test. To answer your precise question: if "you have two methods that return two different properties - and a change in either of them could break something", you should absolutely unit test them both.
Even if the bulk of the work is in GetFooInfo, at minimum you should test that it's actually called. I know nothing about NUnit, but I know in other frameworks (like RSpec) you can write tests like GetFooInfo.should be_called(:once).
It is not a bad practice to test a method that is calling another method. In fact, it is a good practice. If you have a method calling another method, it is probably performing additional functionality, which should be tested.
If you find yourself unit testing a method that calls a method that is also being unit tested, then you are probably experiencing code reuse, which is a good thing.
I agree with @tsm - absolutely test both methods (assuming both are public).
This may be a smell that the method or class is doing too much - violating the Single Responsibility Principle. Consider doing an Extract Class refactoring and decoupling the two classes (possibly with Dependency Injection). That way you could test both pieces of functionality independently. (That said, I'd only do that if the functionality was sufficiently complex to warrant it. It's a judgment call.)
Here's an example in C#:
using System;
using System.IO;

public interface IFooFileInfoProvider
{
    FooInfo GetFooInfo(string filename);
}

public class Parser
{
    private readonly IFooFileInfoProvider _fooFileInfoProvider;

    public Parser(IFooFileInfoProvider fooFileInfoProvider)
    {
        if (fooFileInfoProvider == null)
            throw new ArgumentNullException("fooFileInfoProvider");
        _fooFileInfoProvider = fooFileInfoProvider;
    }

    public Foo ParseMe(string filepath)
    {
        string filename = Path.GetFileName(filepath);
        var myInfo = _fooFileInfoProvider.GetFooInfo(filename);
        return new Foo(myInfo);
    }
}

public class FooFileInfoProvider : IFooFileInfoProvider
{
    public FooInfo GetFooInfo(string filename)
    {
        // Do I/O
        return new FooInfo(); // parameters...
    }
}
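With that split, ParseMe can be tested without touching the file system (a sketch using Moq and NUnit):
[Test]
public void ParseMe_PassesTheFileNameToTheProvider()
{
    var info = new FooInfo();
    var provider = new Mock<IFooFileInfoProvider>();
    provider.Setup(p => p.GetFooInfo("data.txt")).Returns(info);

    var parser = new Parser(provider.Object);
    parser.ParseMe(@"c:\some\dir\data.txt");

    provider.Verify(p => p.GetFooInfo("data.txt"), Times.Once());
}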
Many developers, me included, take a programming-by-contract approach. That requires you to consider each method as a black box: whether the method delegates to another method to accomplish its task does not matter when you are testing the method. But you should also test all large or complicated parts of your program as units. So whether you need to unit test GetFooInfo depends on how complicated that method is.
When unit testing a codebase, what are the tell-tale signs that I need to utilise mock objects?
Would this be as simple as seeing a lot of calls to other objects in the codebase?
Also, how would I unit test methods which don't return values? If a method returns void but prints to a file, do I just check the file's contents?
Mocking is for external dependencies, so that's literally everything, no? File system, db, network, etc...
If anything, I probably over use mocks.
Whenever a class makes a call to another, I generally mock that call out, and I verify that the call was made with the correct parameters. Elsewhere, I'll have a unit test that checks that the concrete code of the mocked-out object behaves correctly.
Example:
[Test]
public void FooMoo_callsBarBaz_whenXisGreaterThan5()
{
    const int TEST_DATA = 6;

    var bar = new Mock<Bar>();
    bar.Setup(x => x.Baz(It.Is<int>(i => i == TEST_DATA)))
       .Verifiable();

    var foo = new Foo(bar.Object);
    foo.moo(TEST_DATA);

    bar.Verify();
}
...
[Test]
public void BarBaz_doesSomething_whenCalled()
{
// another test
}
The thing for me is, if I try to test lots of classes as one big glob, then there's usually tonnes of setup code. Not only is this quite confusing to read as you try to get your head around all the dependencies, it's very brittle when changes need to be made.
I much prefer small succinct tests. Easier to write, easier to maintain, easier to understand the intent of the test.
Mocks/stubs/fakes/test doubles/etc. are fine in unit tests, and permit testing the class/system under test in isolation. Integration tests might not use any mocks; they actually hit the database or other external dependency.
You use a mock or a stub when you have to. Generally this is because the class you're trying to test has a dependency on an interface. For TDD you want to program to interfaces, not implementations, and use dependency injection (generally speaking).
A very simple case:
public class ClassToTest
{
    private readonly IDependency _dependency;

    public ClassToTest(IDependency dependency)
    {
        _dependency = dependency;
    }

    public bool MethodToTest()
    {
        return _dependency.DoSomething();
    }
}
IDependency is an interface, possibly one with expensive calls (database access, web service calls, etc.). A test method might contain code similar to:
// Arrange
var mock = new Mock<IDependency>();
mock.Setup(x => x.DoSomething()).Returns(true);
var systemUnderTest = new ClassToTest(mock.Object);
// Act
bool result = systemUnderTest.MethodToTest();
// Assert
Assert.That(result, Is.True);
Note that I'm doing state testing (as @Finglas suggested), and I'm only asserting against the system under test (the instance of the class I'm testing). I might check property values (state) or the return value of a method, as this case shows.
I recommend reading The Art of Unit Testing, especially if you're using .NET.
Unit tests are only for one piece of code that works autonomously within itself, meaning that it doesn't depend on other objects to do its work. You should use mocks if you are doing test-driven or test-first programming: you create a mock (or stub, as I like to call it) of the function you will be creating and set certain conditions for the test to pass. Originally the function returns false and the test fails, which is expected; then you write the code to do the real work until it passes.
But what I think you are referring to is integration testing, not unit testing. In that case, you should use mocks if you are waiting for other programmers to finish their work and you currently don't have access to the functions or objects they are creating. If you know the interface (which hopefully you do, otherwise mocking is pointless and a waste of time), then you can create a dumbed-down version of what you are hoping to get in the future.
In short, mocks are best utilized when you are waiting for others and need something there in order to finish your work.
You should try to always return a value if possible. Sometimes you run into problems where you are already returning something, but in C and C++ you can have output parameters and then use the return value for error checking.
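C# supports the same style with out parameters; a tiny sketch (TryLoadConfig is illustrative, not from the question):
// the return value reports success; the out parameter carries the data
public static bool TryLoadConfig(string path, out string config)
{
    config = null;
    if (!System.IO.File.Exists(path))
        return false;

    config = System.IO.File.ReadAllText(path);
    return true;
}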
Consider the following class:
public class MyIntSet
{
    private List<int> _list = new List<int>();

    public void Add(int num)
    {
        if (!_list.Contains(num))
            _list.Add(num);
    }

    public bool Contains(int num)
    {
        return _list.Contains(num);
    }
}
Following the "only test one thing" principle, suppose I want to test the "Add" function.
Consider the following possibility for such a test:
[TestClass]
public class MyIntSetTests
{
    [TestMethod]
    public void Add_AddOneNumber_SetContainsAddedNumber()
    {
        MyIntSet set = new MyIntSet();
        int num = 0;
        set.Add(num);
        Assert.IsTrue(set.Contains(num));
    }
}
My problem with this solution is that it actually tests two methods: Add() and Contains().
Theoretically, there could be a bug in both that only manifests in scenarios where they are not called one after the other. Of course, Contains() currently serves as a thin wrapper around List's Contains(), which shouldn't need to be tested in itself, but what if it changes to something more complex in the future? Perhaps a simple "thin wrapper" method should always be kept, for testing purposes?
An alternative approach might suggest mocking out or exposing (possibly using InternalsVisibleTo or PrivateObject) the private _list member and having the test inspect it directly, but that could create test-maintainability problems if the internal list is someday replaced by some other collection (maybe from C5).
Is there a better way to do this?
Are any of my arguments against the above implementations flawed?
Thanks in advance,
JC
Your test seems perfectly OK to me. You may have misunderstood a principle of unit testing.
A single test should (ideally) only test one thing, that is true, but that does not mean that it should test only one method; rather it should only test one behaviour (an invariant, adherence to a certain business rule, etc.) .
Your test tests the behaviour "a number that has been added to the set is reported as contained in the set", which is a single behaviour :-).
To address your other points:
Theoretically, there could be a bug in both, that only manifests in scenarios where they are not called one after the other.
True, but that just means you need more tests :-). For example: add two numbers and then call Contains, or call Contains without Add (see the sketches below).
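For example (sketches in the same MSTest style as your test):
[TestMethod]
public void Contains_WithoutAdd_ReturnsFalse()
{
    MyIntSet set = new MyIntSet();

    Assert.IsFalse(set.Contains(0));
}

[TestMethod]
public void Add_TwoNumbers_SetContainsBoth()
{
    MyIntSet set = new MyIntSet();
    set.Add(0);
    set.Add(1);

    Assert.IsTrue(set.Contains(0));
    Assert.IsTrue(set.Contains(1));
}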
An alternative approach might suggest mocking out or exposing (possibly using InternalsVisibleTo) the private _list member and have the test inspect it directly, but that could potentially create test maintainability problems[...]
Very true, so don't do this. A unit test should always be against the public interface of the unit under test. That's why it's called a unit test, and not a "messing around inside a unit"-test ;-).
There are two possibilities.
You've exposed a flaw in your design. You should carefully consider if the actions that your Add method is executing is clear to the consumer. If you don't want people adding duplicates to the list, why even have a Contains() method? The user is going to be confused when it's not added to the list and no error is thrown. Even worse, they might duplicate the functionality by writing the exact same code before they call .Add() on their list collection. Perhaps it should be removed, and replaced with an indexer? It's not clear from your list class that it's not meant to hold duplicates.
The design is fine, and your public methods should rely on each other. This is normal, and there is no reason you can't test both methods. The more test cases you have, theoretically the better.
As an example, say you have a function that just calls down into other layers which may already be unit tested. That doesn't mean you don't write unit tests for that function, even if it's simply a wrapper.
In practice, your current test is fine. For something this simple it's very unlikely that bugs in add() and contains() would mutually conspire to hide each other. In cases where you are really concerned about testing add() and add() alone, one solution is to make your _list variable available to your unit test code.
[TestClass]
public class MyIntSetInternalsTests
{
    [TestMethod]
    public void Add_AddOneNumber_PrivateListContainsAddedNumber()
    {
        MyIntSet set = new MyIntSet();
        set.Add(0);
        Assert.IsTrue(set._list.Contains(0)); // assumes the test can reach _list
    }
}
Doing this has two drawbacks. One: it requires access to the private _list variable, which is a little complex in C# (I recommend the reflection technique). Two: it makes your test code dependent on the actual implementation of your set, which means you'll have to modify the test if you ever change that implementation. I'd never do this for something as simple as a collections class, but in some cases it may be useful.
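The reflection technique mentioned above might look like this (a sketch; it still carries both drawbacks):
[TestMethod]
public void Add_AddOneNumber_InternalListContainsAddedNumber()
{
    MyIntSet set = new MyIntSet();
    set.Add(0);

    // dig out the private field; requires System.Reflection
    var field = typeof(MyIntSet).GetField("_list",
        BindingFlags.NonPublic | BindingFlags.Instance);
    var list = (List<int>)field.GetValue(set);

    Assert.IsTrue(list.Contains(0));
}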
Suppose you have a method:
public void Save(Entity data)
{
    this.repositoryIocInstance.EntitySave(data);
}
Would you write a unit test at all?
public void TestSave()
{
    // arrange
    Mock<EntityRepository> repo = new Mock<EntityRepository>();
    repo.Setup(m => m.EntitySave(It.IsAny<Entity>()));

    // act
    MyClass c = new MyClass(repo.Object);
    c.Save(new Entity());

    // assert
    repo.Verify(m => m.EntitySave(It.IsAny<Entity>()), Times.Once());
}
Because later on if you do change method's implementation to do more "complex" stuff like:
public void Save(Entity data)
{
    if (this.repositoryIocInstance.Exists(data))
    {
        this.repositoryIocInstance.Update(data);
    }
    else
    {
        this.repositoryIocInstance.Create(data);
    }
}
...your unit test would fail but it probably wouldn't break your application...
Question
Should I even bother creating unit tests on methods that don't have any return values, or don't change anything outside of the internal mock?
Don't forget that unit tests aren't just about testing code. They're about allowing you to determine when behaviour changes.
So you may have something that's trivial. However, your implementation may change and introduce a side effect. You want your regression test suite to tell you.
e.g. Often people say you shouldn't test setters/getters since they're trivial. I disagree: not because they're complicated methods, but because someone may inadvertently change them through ignorance, fat-finger scenarios, etc.
Given all that I've just said, I would definitely implement tests for the above (via mocking; perhaps it's also worth designing your classes with testability in mind and having them report status, etc.).
It's true that your test depends on your implementation, which is something you should avoid (though it is not really that simple sometimes) and is not necessarily bad. But these kinds of tests are expected to break even when a change doesn't break the code.
You could take several approaches to this:
Create a test that really goes to the database and checks that the state was changed as expected (it won't be a unit test anymore).
Create a test object that fakes the database and does the operations in-memory (another implementation of your repositoryIocInstance), and verify that the state was changed as expected; see the sketch after this list. Changes to the repository interface would incur changes to this object as well, but your interfaces shouldn't be changing much, right?
See all of this as too expensive, and use your approach, which may lead to unnecessarily broken tests later (but since the chance of that is low, it may be OK to take the risk).
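The second option might look like this (a sketch that assumes the repository is used through an interface; the names are illustrative):
// an in-memory fake standing in for the real repository
public class FakeEntityRepository : IEntityRepository
{
    public readonly List<Entity> Saved = new List<Entity>();

    public void EntitySave(Entity data)
    {
        Saved.Add(data);
    }
}

[Test]
public void Save_PersistsTheEntity()
{
    var repo = new FakeEntityRepository();
    var c = new MyClass(repo);
    var entity = new Entity();

    c.Save(entity);

    // state-based assertion: no expectations about how Save talked to the repository
    Assert.Contains(entity, repo.Saved);
}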
Ask yourself two questions: "What is the manual equivalent of this unit test?" and "Is it worth automating?". In your case it would be something like:
What is the manual equivalent?
- start the debugger
- step into the "Save" method
- step into the next call and make sure you're inside the IRepository.EntitySave implementation
Is it worth automating? My answer is "no": it is 100% obvious from the code.
Out of hundreds of similar wasteful tests, I haven't seen a single one that turned out to be useful.
The general rule of thumb is that you test everything that could plausibly break. If you are sure that the method is simple enough (and will stay simple enough) not to be a problem, leave it out of the testing.
The second thing is: you should test the contract of the method, not its implementation. If the test fails after a change but the application doesn't, then your test is testing the wrong thing. The tests should cover the cases that are important for your application. This ensures that every change to the method that doesn't break the application doesn't fail the test either.
A method that does not return any result still changes the state of your application. Your unit test, in this case, should be testing whether the new state is as intended.
"your unit test would fail but it probably wouldn't break your application"
This is, actually, really important to know. It may seem annoying and trivial, but when someone else starts maintaining your code, they may make a really bad change to Save and (improbably) break the application.
The trick is to prioritize.
Test the important stuff first. When things are slow, add tests for trivial stuff.
When there isn't an assertion in a test, you are essentially asserting that no exceptions are thrown.
I'm also struggling with the question of how to test public void myMethod(). I guess if you do decide to add a return value for testability, the return value should represent all salient facts necessary to see what changed about the state of the application.
public void myMethod()
becomes
public ComplexObject myMethod()
{
    DoLotsOfSideEffects();
    return new ComplexObject { /* rows changed, primary key, value of each column, etc. */ };
}
and not
public bool myMethod()
{
    DoLotsOfSideEffects();
    return true;
}
The short answer to your question is: Yes, you should definitely test methods like that.
I assume that it is important that the Save method actually saves the data. If you don't write a unit test for this, then how do you know?
Someone else may come along and remove that line of code that invokes the EntitySave method, and none of the unit tests will fail. Later on, you are wondering why items are never persisted...
In your method, you could say that anyone deleting that line would only be doing so if they have malign intentions, but the thing is: Simple things don't necessarily stay simple, and you better write the unit tests before things get complicated.
It is not an implementation detail that the Save method invokes EntitySave on the Repository - it is part of the expected behavior, and a pretty crucial part, if I may say so. You want to make sure that data is actually being saved.
Just because a method does not return a value doesn't mean that it isn't worth testing. In general, if you observe good Command/Query Separation (CQS), any void method should be expected to change the state of something.
Sometimes that something is the class itself, but other times, it may be the state of something else. In this case, it changes the state of the Repository, and that is what you should be testing.
This is called testing Indirect Outputs, instead of the more usual direct outputs (return values).
The trick is to write unit tests so that they don't break too often. When using Mocks, it is easy to accidentally write Overspecified Tests, which is why most Dynamic Mocks (like Moq) defaults to Stub mode, where it doesn't really matter how many times you invoke a given method.
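In Moq terms, the difference is visible right in the mock's construction (a sketch):
// loose (Moq's default): unconfigured calls are allowed and return defaults,
// so incidental implementation changes don't break the test
var looseRepo = new Mock<EntityRepository>();

// strict: every call must be explicitly set up, which easily overspecifies the test
var strictRepo = new Mock<EntityRepository>(MockBehavior.Strict);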
All this, and much more, is explained in the excellent xUnit Test Patterns.