From Wikipedia (emphasis mine, internal references removed):
In the book "The Art of Unit Testing" mocks are described as a fake object that helps decide whether a test failed or passed by verifying whether an interaction with an object occurred.
It seems to me that mocks are testing implementation. Specifically, they test the way that the implementation interacted with a particular object.
Am I interpreting this correctly? Are mocks an intentional breaking of the "test the interface, not the implementation" mantra? Or, are mocks testing at a level other than unit tests?
Correct, mocks do not follow the classicist mantra of "test the interface, not the implementation". Instead of state verification, mocks use behavior verification.
From http://martinfowler.com/articles/mocksArentStubs.html:
Mocks use behavior verification.
Mockist tests are thus more coupled to the implementation of a method. Changing the nature of calls to collaborators usually cause a mockist test to break.
This coupling leads to a couple of concerns. The most important one is the effect on Test Driven Development. With mockist testing, writing the test makes you think about the implementation of the behavior - indeed mockist testers see this as an advantage. Classicists, however, think that it's important to only think about what happens from the external interface and to leave all consideration of implementation until after you're done writing the test.
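Fowler's distinction can be sketched with Python's standard-library `unittest.mock` (the `Warehouse`/`Order` names follow the example in his article; this is an illustrative sketch, not code from the answer):

```python
from unittest.mock import Mock

# A collaborator: a warehouse that an order removes stock from.
class Warehouse:
    def __init__(self, stock):
        self.stock = stock
    def remove(self, quantity):
        self.stock -= quantity

class Order:
    def __init__(self, quantity):
        self.quantity = quantity
    def fill(self, warehouse):
        warehouse.remove(self.quantity)

# Classicist (state verification): assert on the resulting state.
warehouse = Warehouse(stock=50)
Order(20).fill(warehouse)
assert warehouse.stock == 30

# Mockist (behavior verification): assert on the interaction itself.
mock_warehouse = Mock()
Order(20).fill(mock_warehouse)
mock_warehouse.remove.assert_called_once_with(20)
```

Note how the second test never looks at any stock level; it only checks that `remove(20)` was called, which is exactly the coupling to the implementation that the question asks about.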
It seems to me that mocks are testing implementation.
Specifically, they test the way that the implementation interacted with a particular object.
100% correct. However, this is still Unit testing, just from a different perspective.
Let's say you have a method that's supposed to perform a function on 2 numbers using some kind of MathsService. MathsService is given to a Calculator class in order to do the math for the calculator.
Let's pretend MathsService has a single method, PerformFunction(int x, int y) that is supposed to just return x+y.
Testing like this: (all below = pseudo code, with certain bits left out for clarity)
var svc = new MathsService();
var sut = new Calculator(svc);
int expected = 3;
int actual = sut.Calculate(1,2);
Assert.AreEqual(expected,actual,"Doh!");
That's a black box test of the unit Calculator.Calculate(). Your test doesn't know or care how the answer was arrived at. It's important because it gives you a certain level of confidence that your test works correctly.
However, consider this implementation of Calculator.Calculate:
public int Calculate(int x, int y)
{
return 4;
}
The black-box test will now fail, but it can't tell you whether the fault lies in Calculator or in MathsService. Now consider testing like this:
var svc = new Mock<IMathsService>(); //did I mention there was an interface? There's an interface...
svc.Setup(s => s.PerformFunction(1,2)).Returns(3);
var sut = new Calculator(svc.Object);
sut.Calculate(1,2);
svc.Verify(s => s.PerformFunction(1,2), Times.Once(), "Calculate() didn't invoke PerformFunction");
This white box test doesn't tell you anything about the correctness of PerformFunction, but it does prove that, regardless of the result, the Calculator did pass x and y to the IMathsService.PerformFunction method, which is what you want it to do.
You can of course write other tests to verify that the result of PerformFunction is passed back to the caller without modification, etc.
Armed with this knowledge, if your first unit test now fails, you can, with a high degree of confidence, jump right into the MathsService class to look for problems, because you know that the issue is likely not in the Calculator.Calculate method.
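The same pair of tests can be sketched in Python with `unittest.mock` (the names mirror the pseudo code above; this is an illustrative translation, not the answerer's code):

```python
from unittest.mock import Mock

# Calculator delegates the actual math to an injected service.
class Calculator:
    def __init__(self, maths_service):
        self._svc = maths_service
    def calculate(self, x, y):
        return self._svc.perform_function(x, y)

class MathsService:
    def perform_function(self, x, y):
        return x + y

# Black-box test: only the result matters.
assert Calculator(MathsService()).calculate(1, 2) == 3

# White-box test: verify the delegation itself, regardless of the result.
svc = Mock()
svc.perform_function.return_value = 3
Calculator(svc).calculate(1, 2)
svc.perform_function.assert_called_once_with(1, 2)
```

Together the two tests let you localize a failure: if the black-box test fails while the interaction test passes, the bug is most likely inside the service, not the calculator.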
Hope that helps...
I'm sorry, it's a very long post. I've read almost everything on the subject, and I'm not yet convinced that it's a bad idea to partially mock the SUT just to DRY up the tests. So, I need to first address all the arguments against it in order to avoid repetitive answers. Please bear with me.
Have you ever felt the urge to partially mock out the SUT itself, in order to make tests more DRY? Fewer mocks, less madness, more readable tests?!
Let's have an example at hand to discuss the subject more clearly:
class Sut
{
public function fn0(...)
{
// Interacts with: Dp0
}
public function fn1(...)
{
// Calls: fn0
// Interacts with: Dp1
}
public function fn2(...)
{
// Calls: fn0
// Interacts with: Dp1
}
public function fn3(...)
{
// Calls: fn2
}
public function fn4(...)
{
// Calls: fn1(), fn2(), fn3()
// Interacts with: Dp2
}
}
Now, let's test the behavior of the SUT. Each fn*() is representing a behavior of the class under test. Here, I'm not trying to unit-test each single method of the SUT, but its exposed behaviors.
class SutTest extends \PHPUnit_Framework_TestCase
{
/**
@covers Sut::fn0
*/
public function testFn0()
{
// Mock Dp0
}
/**
@covers Sut::fn1
*/
public function testFn1()
{
// Mock Dp1, which is a direct dependency
// Mock Dp0, which is an indirect dependency
}
/**
@covers Sut::fn2
*/
public function testFn2()
{
// Mock Dp1 with different expectations than testFn1()
// Mock Dp0 with different expectations
}
/**
@covers Sut::fn3
*/
public function testFn3()
{
// Mock Dp1, again with different expectations
// Mock Dp0, with different expectations
}
/**
@covers Sut::fn4
*/
public function testFn4()
{
// Mock Dp2, which is a direct dependency
// Mock Dp0, Dp1 as indirect dependencies
}
}
You get the terrible idea! You need to keep repeating yourself. It's not DRY at all. And as the expectations of each mock object may differ for each test, you can't just mock out all the dependencies and set expectations once for the whole testcase. You need to be explicit about the behavior of the mock for each test.
Let's also have some real code to see what it would look like when a test needs to mock out all dependencies through the code path it's testing:
/** @test */
public function dispatchesActionsOnACollectionOfElementsFoundByALocator()
{
$elementMock = $this->mock(RemoteWebElement::class)
->shouldReceive('sendKeys')
->times(3)
->with($text = 'some text...')
->andReturn(Mockery::self())
->mock();
$this->inject(RemoteWebDriver::class)
->shouldReceive('findElements')
->with(WebDriverBy::class)
->andReturn([$elementMock, $elementMock, $elementMock])
->shouldReceive('getCurrentURL')
->zeroOrMoreTimes();
$this->inject(WebDriverBy::class)
->shouldReceive('xpath')
->once()
->with($locator = 'someLocatorToMatchMultipleElements')
->andReturn(Mockery::self());
$this->inject(Locator::class)
->shouldReceive('isLocator', 'isXpath')
->andReturn(true, false);
$this->type($text, $locator);
}
Madness! In order to test one small method, you find yourself writing a test that is extremely unreadable and coupled to the implementation details of 3 or 4 other dependent methods up the call chain. It's even more terrible when you see the whole testcase: lots of those mock blocks, duplicated with different expectations to cover different paths of code. The test mirrors the implementation details of a few other methods. It's painful.
Workaround
OK, back to the first pseudo-code. While testing fn3(), you start thinking: what if I could mock the call to fn2() and put a stop to all the mocking madness? So I make a partial mock of the SUT, set expectations on fn2(), and make sure that the method under test interacts with fn2() correctly.
In other words, to avoid excessive mocking of external dependencies, I focus on just one behavior of the SUT (which might be one method or a couple of methods) and make sure it behaves correctly. I mock out all the other methods that belong to other behaviors of the SUT. No worries about them; they all have their own tests.
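The workaround can be sketched in Python with `patch.object` from the standard library (the `Sut`/`fn2`/`fn3` names mirror the pseudo code above; the bodies are hypothetical):

```python
from unittest.mock import patch

# Hypothetical SUT mirroring the pseudo code: fn3() delegates to fn2().
class Sut:
    def fn2(self, x):
        # imagine interactions with Dp0/Dp1 happening here
        return x * 2
    def fn3(self, x):
        return self.fn2(x) + 1

sut = Sut()

# Partial mock: stub out fn2 on this instance and verify fn3's
# interaction with it, instead of re-mocking Dp0 and Dp1 yet again.
with patch.object(sut, "fn2", return_value=10) as fn2:
    assert sut.fn3(4) == 11
    fn2.assert_called_once_with(4)
```

Outside the `with` block the real `fn2` is restored, so the partial mock only exists for the one test that needs it.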
Contrary reasonings
One might discuss that:
The problem with stubbing/mocking methods in a class is that you are violating the encapsulation. Your test should be checking to see whether or not the external behaviour of the object matches the specifications. Whatever happens inside the object is none of its business. By mocking the public methods in order to test the object, you are making an assumption about how that object is implemented.
When unit-testing, it's rare that you only deal with behaviors that are fully testable by providing input and expecting output, treating the SUT as a black box. Most of the time you need to test how objects interact with one another. So we need at least some information about the internal implementation of the SUT to be able to fully test it.
When we mock a dependency, we're ALREADY making an assumption about how the SUT works. We're binding the test to the implementation details of the SUT. So, now that we're deep in the mud, why not mock an internal method to make our lives easier?!
Some may say:
Mocking the methods is taking the object (SUT) and breaking it into two pieces. One piece is being mocked while the other piece is being tested. What you are doing is essentially an ad-hoc breaking up of the object. If that's the case, just break up the object already.
Unit tests should treat the classes they test as black boxes. The only thing which matters is that its public methods behave the way it is expected. How the class achieves this through internal state and private methods does not matter. When you feel that it is impossible to create meaningful tests this way, it is a sign that your classes are too powerful and do too much. You should consider to move some of their functionality to separate classes which can be tested separately.
If there are arguments that support this separation (partially mocking the SUT), the same arguments can be used to refactor the class into two classes, and that is exactly what you should do then.
If it's a smell for SRP, yeah right, the functionality can be extracted into another class and then you can easily mock that class and go happily home. But that's not the case here. The SUT's design is OK, it has no SRP issues, it's small, it does one job and adheres to the SOLID principles. When you look at the SUT's code, there's no reason to break the functionality out into other classes. It's already broken into very fine pieces.
How come when you look at the SUT's tests, you decide to break up the class? How come it's OK to mock out aaaaall those dependencies along the way when testing fn3(), but not OK to mock the only real dependency it has, fn2() (even though it's an internal one)? Either way, we're bound to the implementation details of the SUT. Either way, the tests are fragile.
It's important to notice why we want to mock those methods. We just want easier testing, less mocks, while maintaining the absolute isolation of the SUT (more on this later).
Some other might reason:
The way I see it, an object has external and internal behaviors. External behavior includes returns values, calls into other objects, etc. Obviously, anything in that category should be tested. But internal behavior shouldn't really be tested. I don't write tests directly on the internal behavior, only indirectly through the external behavior.
Right, I do that, too. But we're not testing the internals of SUT, we're just exercising its public API and we want to avoid excessive mocking.
The reasoning states that external behavior includes calls into other objects; I agree. We're trying to test the SUT's external calls here as well, just by early-mocking the internal method that makes the interaction. Those mocked methods already have their own tests.
Someone else reasons:
Too many mocks and already perfectly broken into multiple classes? You are over-unit-testing by unit-testing what should be integration tested.
The example code has just 3 external dependencies, and I don't think that's too many. Again, it's important to notice why I want to partially mock the SUT: only for easier testing, avoiding excessive mocks.
By the way, the reasoning might be true somehow. I might need to do integration testing in some cases. More on this in the next section.
The last one says:
These are all tests man, not production code, they don't need to be DRY!
I've actually read something like this! And I simply don't think so. I need to put my lifetime to good use! You do, too!
Bottom-line: To mock, or not to mock?
When we choose to mock, we're writing whitebox unit-tests. We're binding tests to the implementation details of the SUT, more or less. Then, if we decide to go down the PURE way and radically maintain the isolation of the SUT, sooner or later we find ourselves in the hell of mocking madness... and fragile tests. Ten months into maintenance, you find yourself serving the unit-tests instead of them serving you. You find yourself reworking multiple tests for a single change in the implementation of one SUT method. Painful, right?
So, if we're going this way, why not partially mock the SUT? Why not make our lives waaaaaaaaay easier? I see no reason not to. Do you?
I've read and read, and finally came across this article by Uncle Bob:
https://8thlight.com/blog/uncle-bob/2014/05/10/WhenToMock.html
To quote the most important part:
Mock across architecturally significant boundaries, but not within those boundaries.
I think it's the remedy to all the mocking madness I told you about. There's no need to radically maintain the isolation of the SUT, as I'd blindly learnt. Even though it may work most of the time, it may also force you to live in your private mocking hell, banging your head against the wall.
This little gem of advice is the only reasoning that makes sense of not partially mocking the SUT. In fact, it's the exact opposite of doing so. But now the question would be: isn't that integration testing? Is it still called unit-testing? What's the UNIT here? Architecturally significant boundaries?
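Uncle Bob's advice can be sketched in Python: mock only the collaborator that sits on an architecturally significant boundary (an external service), and use the real in-process collaborators together. All the class and method names here are hypothetical, invented for illustration:

```python
from unittest.mock import Mock

# PaymentGateway would sit on an architectural boundary (an external
# payment provider); PriceCalculator and Checkout live inside it.
class PriceCalculator:
    def total(self, items):
        return sum(items)

class Checkout:
    def __init__(self, calculator, gateway):
        self._calc = calculator
        self._gateway = gateway
    def pay(self, items):
        self._gateway.charge(self._calc.total(items))

# Mock only the boundary; exercise the real in-process collaborator.
gateway = Mock()
Checkout(PriceCalculator(), gateway).pay([5, 7])
gateway.charge.assert_called_once_with(12)
```

The test still verifies one observable behavior (the right amount is charged), but it no longer mirrors how Checkout and PriceCalculator divide the work between themselves.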
Here's another article by Google Testing team, implicitly suggesting the same practice:
https://testing.googleblog.com/2013/05/testing-on-toilet-dont-overuse-mocks.html
To recap
If we're going down the pure isolation way, assuming that the SUT is already broken into fine pieces, with minimum possible external deps, is there any reason not to partially mock the SUT? In order to avoid the excessive mocking and to make unit-tests more DRY?
If we take the advice of Uncle Bob to the heart and only "Mock across architecturally significant boundaries, but not within those boundaries.", is that still considered unit testing? What's the unit here?
Thank you for reading.
P.S. Those contrary reasonings are more or less from existing SO answers or articles I found on the subject. Unfortunately, I don't currently have the refs to link to.
Unit tests don't have to be isolated unit tests, at least if you accept the definition promoted by authors like Martin Fowler and Kent Beck. Kent is the creator of JUnit, and probably the main proponent of TDD. These guys don't do mocking.
In my own experience (as long-time developer of an advanced Java mocking library), I see programmers misusing and abusing mocking APIs all the time. In particular, when they think that partially mocking the SUT is a valid idea. It is not. Making test code more DRY shouldn't be an excuse for over-mocking.
Personally, I favor integration tests with minimal or (preferably) no mocking. As long as your tests are stable and run fast enough, they are OK. The important thing is that tests don't become a pain to write and maintain, and, more importantly, that they don't discourage programmers from running them. (This is why I avoid functional UI-driven tests - they tend to be a pain to run.)
I'm looking to better understand how I should test functions that have many substeps or subfunctions.
Let's say I have the functions
// Modify the state of class somehow
public void DoSomething(){
DoSomethingA();
DoSomethingB();
DoSomethingC();
}
Every function here is public. Each subfunction has 2 paths. So to test every path for DoSomething() I'd have 2*2*2 = 8 tests. By writing 8 tests for DoSomething() I will have indirectly tested the subfunctions too.
So should I be testing like this, or instead write unit tests for each of the subfunctions and then only write 1 test case that measures the final state of the class after DoSomething() and ignore all the possible paths? A total of 2+2+2+1 = 7 tests. But is it bad then that the DoSomething() test case will depend on the other unit test cases to have complete coverage?
There appears to be a very prevalent religious belief that testing should be unit testing. While I do not intend to underestimate the usefulness of unit testing, I would like to point out that it is just one possible flavor of testing, and its extensive (or even exclusive) use is indicative of people (or environments) that are somewhat insecure about what they are doing.
In my experience knowledge of the inner workings of a system is useful as a hint for testing, but not as an instrument for testing. Therefore, black box testing is far more useful in most cases, though that's admittedly in part because I do not happen to be insecure about what I am doing. (And that is in turn because I use assertions extensively, so essentially all of my code is constantly testing itself.)
Without knowing the specifics of your case, I would say that in general, the fact that DoSomething() works by invoking DoSomethingA() and then DoSomethingB() and then DoSomethingC() is an implementation detail that your black-box test should best be unaware of. So, I would definitely not test that DoSomething() invokes DoSomethingA(), DoSomethingB(), and DoSomethingC(), I would only test to make sure that it returns the right results, and using the knowledge that it does in fact invoke those three functions as a hint I would implement precisely those 7 tests that you were planning to use.
On the other hand, it should be noted that if DoSomethingA() and DoSomethingB() and DoSomethingC() are also public functions, then you should also test them individually, too.
Definitely test every subfunction separately (because they're public).
It would help you find the problem if one pops up.
If DoSomething only uses other functions, I wouldn't bother writing additional tests for it. If it has some other logic, I would test it, but assume all functions inside work properly (if they're in a different class, mock them).
The point is finding what the function does that is not covered in other tests and testing that.
Indirect testing should be avoided. You should write unit tests for each function explicitly. After that, you should mock the submethods and test your main function. For example:
You have a method which inserts a user to DB and method is like this :
void InsertUser(User user){
var exists = SomeExternal.UserExists(user);
if(exists)
throw new Exception("bla bla bla");
//Insert codes here
}
If you want to test the InsertUser function, you should mock the external/sub/nested methods and test the behaviour of the InsertUser function.
This example creates two tests: 1 - "When user exists then Should throw Exception" 2 - "When user does not exist then Should insert user"
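The two tests can be sketched in Python with `unittest.mock` (the names are illustrative translations of the C# above, not the answerer's real code):

```python
from unittest.mock import Mock

class UserExistsError(Exception):
    pass

class UserService:
    def __init__(self, external):
        self._external = external
    def insert_user(self, user):
        if self._external.user_exists(user):
            raise UserExistsError(user)
        self._external.save(user)  # "Insert codes here"

# 1 - when user exists, it should throw
external = Mock()
external.user_exists.return_value = True
try:
    UserService(external).insert_user("joe")
    assert False, "expected UserExistsError"
except UserExistsError:
    pass

# 2 - when user does not exist, it should insert
external = Mock()
external.user_exists.return_value = False
UserService(external).insert_user("joe")
external.save.assert_called_once_with("joe")
```

Each test mocks the external collaborator and checks one behavior of `insert_user`, without touching a real database.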
I have just started to read Professional Test Driven Development with C#: Developing Real World Applications with TDD
I have a hard time understanding stubs, fakes and mocks. From what I understand so far, they are fake objects used for the purpose of unit testing your projects, and a mock is a stub with conditional logic in it.
Another thing I think I have picked up is that mocks are somehow related with dependency injection, a concept which I only managed to understand yesterday.
What I do not get is why I would actually use them. I cannot seem to find any concrete examples online that explains them properly.
Can anyone please explain to me this concepts?
As I've read in the past, here's what I believe each term stands for
Stub
Here you are stubbing the result of a method to a known value, just to let the code run without issues. For example, let's say you had the following:
public int CalculateDiskSize(string networkShareName)
{
// This method does things on a network drive.
}
You don't care what the return value of this method is, it's not relevant. Plus it could cause an exception when executed if the network drive is not available. So you stub the result in order to avoid potential execution issues with the method.
So you end up doing something like:
sut.WhenCalled(() => sut.CalculateDiskSize()).Returns(10);
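A stub in Python can be sketched with the standard library's `Mock` by pinning the return value (the method and share names here are illustrative, mirroring the C# example above):

```python
from unittest.mock import Mock

# Stub: pin the return value so surrounding code can run safely,
# without touching a real network drive.
disk_service = Mock()
disk_service.calculate_disk_size.return_value = 10

def report(service, share):
    # Hypothetical caller that uses the stubbed result.
    return f"{share}: {service.calculate_disk_size(share)} GB free"

assert report(disk_service, r"\\server\share") == r"\\server\share: 10 GB free"
```

The test never asserts anything about the stub itself; the canned value just lets the code under test run without an exception.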
Fake
With a fake you are returning fake data, or creating a fake instance of an object. A classic example are repository classes. Take this method:
public int CalculateTotalSalary(IList<Employee> employees) { }
Normally the above method would be passed a collection of employees that were read from a database. However in your unit tests you don't want to access a database. So you create a fake employees list:
IList<Employee> fakeEmployees = new List<Employee>();
You can then add items to fakeEmployees and assert the expected results, in this case the total salary.
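A Python sketch of the same idea, with a hypothetical `Employee` stand-in for the poster's class:

```python
from dataclasses import dataclass

# Fake: real, working data, just not read from a database.
@dataclass
class Employee:
    name: str
    salary: int

def calculate_total_salary(employees):
    return sum(e.salary for e in employees)

fake_employees = [Employee("Ann", 30000), Employee("Bob", 45000)]
assert calculate_total_salary(fake_employees) == 75000
```

Nothing is mocked or verified here; the fake list simply replaces the database-backed collection so the logic can be asserted on directly.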
Mocks
When using mock objects you intend to verify some behaviour, or data, on those mock objects. Example:
You want to verify that a specific method was executed during a test run, here's a generic example using Moq mocking framework:
public void Test()
{
// Arrange.
var mock = new Mock<ISomething>();
mock.Expect(m => m.MethodToCheckIfCalled()).Verifiable();
var sut = new ThingToTest();
// Act.
sut.DoSomething(mock.Object);
// Assert
mock.Verify(m => m.MethodToCheckIfCalled());
}
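The same arrange/act/assert shape with Python's `unittest.mock` (`ThingToTest` and the method names are illustrative stand-ins mirroring the Moq example):

```python
from unittest.mock import Mock

class ThingToTest:
    def do_something(self, something):
        something.method_to_check_if_called()

# Arrange.
mock = Mock()
sut = ThingToTest()
# Act.
sut.do_something(mock)
# Assert: verify the specific method was executed during the test run.
mock.method_to_check_if_called.assert_called_once()
```

As in the Moq version, the assertion is about behavior (the call happened), not about any returned value or state.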
Hopefully the above helps clarify things a bit.
EDIT:
Roy Osherove is a well-known advocate of Test Driven Development, and he has some excellent information on the topic. You may find it very useful :
http://artofunittesting.com/
They are all variations of the Test Double. Here is a very good reference that explains the differences between them: http://xunitpatterns.com/Test%20Double.html
Also, from Martin Fowler's post: http://martinfowler.com/articles/mocksArentStubs.html
Meszaros uses the term Test Double as the generic term for any kind of
pretend object used in place of a real object for testing purposes.
The name comes from the notion of a Stunt Double in movies. (One of
his aims was to avoid using any name that was already widely used.)
Meszaros then defined four particular kinds of double:
Dummy objects: are passed around but never actually used. Usually they
are just used to fill parameter lists.
Fake objects actually have working implementations, but usually take some shortcut which makes
them not suitable for production (an in memory database is a good
example).
Stubs provide canned answers to calls made during the test,
usually not responding at all to anything outside what's programmed in
for the test. Stubs may also record information about calls, such as
an email gateway stub that remembers the messages it 'sent', or maybe
only how many messages it 'sent'.
Mocks are what we are talking about here: objects pre-programmed with expectations which form a
specification of the calls they are expected to receive.
Of these kinds of doubles, only mocks insist upon behavior verification. The
other doubles can, and usually do, use state verification. Mocks
actually do behave like other doubles during the exercise phase, as
they need to make the SUT believe it's talking with its real
collaborators.
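Meszaros's four doubles can be shown side by side in one small Python sketch (all names here are hypothetical, chosen only to illustrate each role):

```python
from unittest.mock import Mock

# Dummy: passed around but never actually used.
dummy_logger = object()

# Fake: a working, shortcut implementation (an in-memory "database").
fake_db = {}

# Stub: provides a canned answer to calls made during the test.
stub_clock = Mock()
stub_clock.now.return_value = "2020-01-01"

# Mock: pre-programmed with an expectation, verified afterwards.
mock_mailer = Mock()

def register(name, db, clock, mailer, logger):
    db[name] = clock.now()
    mailer.send_welcome(name)

register("ann", fake_db, stub_clock, mock_mailer, dummy_logger)
assert fake_db == {"ann": "2020-01-01"}          # state verification
mock_mailer.send_welcome.assert_called_once_with("ann")  # behavior verification
```

Note that only the mock gets a behavior-verification assertion; the fake and stub support ordinary state verification, exactly as the quote says.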
This PHP Unit's manual helped me a lot as introduction:
"Sometimes it is just plain hard to test the system under test (SUT) because it depends on other components that cannot be used in the test environment. This could be because they aren't available, they will not return the results needed for the test or because executing them would have undesirable side effects. In other cases, our test strategy requires us to have more control or visibility of the internal behavior of the SUT." More: https://phpunit.de/manual/current/en/test-doubles.html
And i find better "introductions" when looking for "test doubles" as mocks, fakes, stubs and the others are known.
If a function just calls another function or performs actions, how do I test it? Currently, I enforce that all functions return a value so that I can assert on the return values. However, I think this approach messes up the API, because in the production code I don't need those functions to return values. Any good solutions?
I think mock object might be a possible solution. I want to know when should I use assert and when should I use mock objects? Is there any general guide line?
Thank you
Let's use BufferedStream.Flush() as an example method that doesn't return anything; how would we test this method if we had written it ourselves?
There is always some observable effect, otherwise the method would not exist. So the answer can be to test for the effect:
[Test]
public void FlushWritesToUnderlyingStream()
{
var memory = new byte[10];
var memoryStream = new MemoryStream(memory);
var buffered = new BufferedStream(memoryStream);
buffered.WriteByte(0xFF);
Assert.AreEqual(0x00, memory[0]); // not yet flushed, memory unchanged
buffered.Flush();
Assert.AreEqual(0xFF, memory[0]); // now it has changed
}
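A rough Python analogue of the same test, using `io.BufferedWriter` over a `BytesIO` as the observable underlying stream (standard library only):

```python
import io

# The in-memory stream plays the role of MemoryStream above.
raw = io.BytesIO()
buffered = io.BufferedWriter(raw, buffer_size=1024)

buffered.write(b"\xff")
assert raw.getvalue() == b""      # not yet flushed, underlying stream unchanged
buffered.flush()
assert raw.getvalue() == b"\xff"  # now it has changed
```

The observable effect of `flush()` is the change in the underlying stream, so no return value is needed to make the method testable.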
The trick is to structure your code so that these effects aren't too hard to observe in a test:
- explicitly pass collaborator objects, just like how the memoryStream is passed to the BufferedStream in the constructor. This is called dependency injection.
- program against an abstraction, just like how BufferedStream is programmed against the Stream base class. This enables you to pass simpler, test-friendly implementations (like MemoryStream in this case) or use a mocking framework (like Moq or RhinoMocks), which is all great for unit testing.
Sorry for not answering straight but ... are you sure you have the exact balance in your testing?
I wonder if you are not testing too much ?
Do you really need to test a function that merely delegates to another?
Returns only for the tests
I agree with you when you write you don't want to add return values that are useful only for the tests, not for production. This clutters your API, making it less clear, which is a huge cost in the end.
Also, your return value could look correct to the test, but nothing says that it actually corresponds to what the implementation did, so the test is probably not proving anything anyway...
Costs
Note that testing has an initial cost, the cost of writing the test.
If the implementation is very easy, the risk of failure is ridiculously low, but the time spent testing still accumulates (over hundreds or thousands of cases, it ends up being pretty serious).
But more than that, each time you refactor your production code, you will probably have to refactor your tests also. So the maintenance cost of your tests will be high.
Testing the implementation
Testing what a method does (what other methods it calls, etc.) is criticized, just like testing a private method... Several points are made:
this is fragile and costly : any code refactoring will break the tests, so this increases the maintenance cost
Testing a private method does not bring much safety to your production code, because your production code is not making that call. It's like verifying something you won't actually need.
When code effectively delegates to another method, the implementation is so simple that the risk of mistakes is very low, and the code almost never changes, so what works once (when you write it) will never break...
Yes, mock is generally the way to go, if you want to test that a certain function is called and that certain parameters are passed in.
Here's how to do it in Typemock (C#):
Isolate.Verify.WasCalledWithAnyArguments(() => myInstance.WeatherService("", "", null, 0));
Isolate.Verify.WasCalledWithExactArguments(() => myInstance.StockQuote("", "", null, 0));
In general, you should use Assert as much as possible, until you can't (for example, when you have to test whether you call an external web service API properly; in that case you can't, or don't want to, communicate with the web service directly). In that case you use a mock to verify that a certain web service method is called with the correct parameters.
"I want to know when should I use assert and when should I use mock objects? Is there any general guide line?"
There's an absolute, fixed and important rule.
Your tests must contain assert. The presence of assert is what you use to see if the test passed or failed. A test is a method that calls the "component under test" (a function, an object, whatever) in a specific fixture, and makes specific assertions about the component's behavior.
A test asserts something about the component being tested. Every test must have an assert, or it isn't a test. If it doesn't have assert, it's not clear what you're doing.
A mock is a replacement for a component to simplify the test configuration. It is a "mock" or "imitation" or "false" component that replaces a real component. You use mocks to replace something and simplify your testing.
Let's say you're going to test function a. And function a calls function b.
The tests for function a must have an assert (or it's not a test).
The tests for a may need a mock for function b. To isolate the two functions, you test a with a mock for function b.
The tests for function b must have an assert (or it's not a test).
The tests for b may not need anything mocked. Or, perhaps b makes an OS API call. This may need to be mocked. Or perhaps b writes to a file. This may need to be mocked.
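The rule above can be sketched in Python (the `a`/`b` names follow the answer; the bodies are hypothetical):

```python
from unittest.mock import patch

def b(x):
    # imagine this hits the OS or writes to a file
    return x + 1

def a(x):
    return b(x) * 2

# Test for a(): b is mocked to isolate a(), but the test still ends
# with an assert about a()'s behavior.
with patch(f"{__name__}.b", return_value=5) as mocked_b:
    assert a(3) == 10
    mocked_b.assert_called_once_with(3)

# Test for b(): a plain assert, nothing mocked.
assert b(3) == 4
```

The mock simplifies the fixture for `a`; the asserts are what make each of these a test.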
I'm basically trying to teach myself how to code and I want to follow good practices. There are obvious benefits to unit testing. There is also much zealotry when it comes to unit-testing and I prefer a much more pragmatic approach to coding and life in general. As context, I'm currently writing my first "real" application which is the ubiquitous blog engine using asp.net MVC. I'm loosely following the MVC Storefront architecture with my own adjustments. As such, this is my first real foray into mocking objects. I'll put the code example at the end of the question.
I'd appreciate any insight or outside resources that I could use to increase my understanding of the fundamentals of testing and mocking. The resources I've found on the net are typically geared towards the "how" of mocking and I need more understanding of the where, why and when of mocking. If this isn't the best place to ask this question, please point me to a better place.
I'm trying to understand the value that I'm getting from the following tests. The UserService is dependent upon the IUserRepository. The value of the service layer is to separate your logic from your data storage, but in this case most of the UserService calls are just passed straight to IUserRepository. The fact that there isn't much actual logic to test could be the source of my concerns as well. I have the following concerns.
It feels like the code is just testing that the mocking framework is working.
In order to mock out the dependencies, it makes my tests have too much knowledge of the IUserRepository implementation. Is this a necessary evil?
What value am I actually gaining from these tests? Is the simplicity of the service under test causing me to doubt their value?
I'm using NUnit and Rhino.Mocks, but it should be fairly obvious what I'm trying to accomplish.
[SetUp]
public void Setup()
{
userRepo = MockRepository.GenerateMock<IUserRepository>();
userSvc = new UserService(userRepo);
theUser = new User
{
ID = null,
UserName = "http://joe.myopenid.com",
EmailAddress = "joe@joeblow.com",
DisplayName = "Joe Blow",
Website = "http://joeblow.com"
};
}
[Test]
public void UserService_can_create_a_new_user()
{
// Arrange
userRepo.Expect(repo => repo.CreateUser(theUser)).Return(true);
// Act
bool result = userSvc.CreateUser(theUser);
// Assert
userRepo.VerifyAllExpectations();
Assert.That(result, Is.True,
"UserService.CreateUser(user) failed when it should have succeeded");
}
[Test]
public void UserService_can_not_create_an_existing_user()
{
    // Arrange
    userRepo.Stub(repo => repo.IsExistingUser(theUser)).Return(true);
    userRepo.Expect(repo => repo.CreateUser(theUser)).Return(false);

    // Act
    bool result = userSvc.CreateUser(theUser);

    // Assert
    userRepo.VerifyAllExpectations();
    Assert.That(result, Is.False,
        "UserService.CreateUser() allowed multiple copies of same user to be created");
}
Essentially, what you are testing here is that the methods are getting called, not whether they actually work. That is exactly what mocks are designed to do: instead of calling the real method, the mock just records that the method was called and returns whatever you put in the Return() statement. So in your assertion here:
Assert.That(result, Is.False, "error message here");
This assertion will ALWAYS succeed because your expectation will ALWAYS return false, because of the Return statement:
userRepo.Expect(repo => repo.CreateUser(theUser)).Return(false);
I'm guessing this isn't that useful in this case.
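To make the self-fulfilling nature of that test concrete, here is a minimal sketch in Python (using the standard library's `unittest.mock` as a stand-in for Rhino.Mocks; the pass-through `UserService` mirrors the one under discussion). Because the service merely forwards the repository's result, the assertion can only ever see the value the mock was told to return:

```python
from unittest.mock import Mock

# A pass-through service with no logic of its own, like the one in question.
class UserService:
    def __init__(self, repo):
        self.repo = repo

    def create_user(self, user):
        return self.repo.create_user(user)  # just forwards the repo's answer

# Program the mock to return False...
repo = Mock()
repo.create_user.return_value = False

# ...and the assertion is guaranteed to pass: it exercises the mocking
# framework and the forwarding call, not any real behavior.
assert UserService(repo).create_user("joe") is False
```

The test is really just confirming that the stubbed value made the round trip through the service.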
Where mocking is useful is when you want to, for example, make a database call somewhere in your code, but you don't want to actually call to the database. You want to pretend that the database got called, but you want to set up some fake data for it to return, and then (here's the important part) test the logic that does something with the fake data your mock returned. In the above examples you are omitting the last step. Imagine you had a method that displayed a message to the user that said whether the new user was created:
public string displayMessage(bool userWasCreated) {
if (userWasCreated)
return "User created successfully!";
return "User already exists";
}
then your test would be
userRepo.Expect(repo => repo.CreateUser(theUser)).Return(false);
Assert.AreEqual("User already exists", displayMessage(userSvc.CreateUser(theUser)));
Now this has some value, because you are testing some actual behavior. Of course, you could also just test this directly by passing in "true" or "false." You don't even need a mock for that test. Testing expectations is fine, but I've written plenty of tests like that, and have come to the same conclusion that you are reaching - it just isn't that useful.
So in short, mocking is useful when you want to abstract away externalities such as databases or web service calls and inject known values at that point. But it is not often useful to test the mocks themselves.
You are right: the simplicity of the service makes these tests uninteresting. It is not until you get more business logic in the service, that you will gain value from the tests.
You might consider some tests like these:
CreateUser_fails_if_email_is_invalid()
CreateUser_fails_if_username_is_empty()
Another comment: it looks like a code smell that your methods return booleans to indicate success or failure. You might have a good reason to do it, but usually you should let exceptions propagate out. Returning booleans also makes it harder to write good tests, since you will have trouble detecting whether your method failed for the "right reason". For example, you might write the CreateUser_fails_if_email_is_invalid() test like this:
[Test]
public void CreateUser_fails_if_email_is_invalid()
{
    bool result = userSvc.CreateUser(userWithInvalidEmailAddress);
    Assert.That(result, Is.False);
}
and it would probably pass with your existing code. Using the TDD Red-Green-Refactor cycle would mitigate this problem, but it would be even better to be able to detect that the method failed because of the invalid email, and not because of some other problem.
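A sketch of that idea in Python (the `InvalidEmailError` name and the simple regex are my own assumptions, not from the original code): raising a specific exception lets the test pin down *why* creation failed, instead of folding every failure into a single `False`.

```python
import re

# Hypothetical exception type: one per failure reason.
class InvalidEmailError(Exception):
    pass

def create_user(email):
    # Deliberately simple validity check for illustration only.
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        raise InvalidEmailError(email)
    return True  # actual creation logic elided

# The test now fails unless the method failed for the *right* reason:
try:
    create_user("not-an-email")
    assert False, "expected InvalidEmailError"
except InvalidEmailError:
    pass  # failed for exactly the reason this test is about
```

A failure caused by, say, a duplicate user would raise a different exception and be caught by a different test, so each test documents one failure mode.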
If you write your tests before you write your code, you'll gain much more from your unit tests. One of the reasons that it feels like your tests aren't worth much is that you're not deriving the value of having your tests drive the design. Writing your tests afterwards is mostly just an exercise in seeing if you can remember everything that can go wrong. Writing your tests first causes you to think about how you would actually implement the functionality.
These tests aren't all that interesting because the functionality being implemented is pretty basic. The way you are going about mocking seems pretty standard: mock the things the class under test depends on, not the class under test itself. Testability (or good design sense) has already led you to implement interfaces and use dependency injection to reduce coupling. You might want to think about changing the error handling, as others have suggested. It would be nice to know why CreateUser failed, if only to improve the quality of your tests. You could do this with exceptions or with an out parameter (which is how MembershipProvider works, if I remember correctly).
You are facing the question of "classical" vs. "mockist" approaches to testing. Or "state-verification" vs. "behaviour-verification" as described by Martin Fowler: http://martinfowler.com/articles/mocksArentStubs.html#ClassicalAndMockistTesting
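The two styles can be contrasted in a few lines. Below is a hedged Python sketch using the `Calculator`/`MathsService` pair mentioned earlier in the thread (method names adapted to Python conventions): the classical test asserts only on the result through a trivial fake, while the mockist test additionally asserts that the collaborator was called in a particular way, coupling it to the implementation.

```python
from unittest.mock import Mock

class Calculator:
    def __init__(self, maths):
        self.maths = maths

    def add(self, x, y):
        return self.maths.perform_function(x, y)

# Classical / state verification: use a trivial fake collaborator and
# assert only on the observable result.
class FakeMathsService:
    def perform_function(self, x, y):
        return x + y

assert Calculator(FakeMathsService()).add(2, 3) == 5

# Mockist / behavior verification: assert that the collaborator was
# called with the expected arguments. Renaming or restructuring the call
# to perform_function would break this test even if add() still works.
maths = Mock()
maths.perform_function.return_value = 5
assert Calculator(maths).add(2, 3) == 5
maths.perform_function.assert_called_once_with(2, 3)
```

The second test is the Rhino.Mocks `Expect(...)` / `VerifyAllExpectations()` idiom in miniature, which is why mockist tests tend to break when the implementation changes.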
Another excellent resource is Gerard Meszaros' book "xUnit Test Patterns: Refactoring Test Code".