Organizing unit tests within a test class - unit-testing

Suppose I have several unit tests in a test class ([TestClass] in VSUnit in my case). I'm trying to test just one thing in each test (which doesn't mean just one Assert, though). Imagine there's one test (e.g. Test_MethodA()) that tests a method used in other tests as well. I do not want to put an assert on this method in the other tests that use it, to avoid duplication/maintainability issues, so I have the assert in only this one test. Now when this test fails, all tests that depend on correct execution of that tested method fail as well. I want to be able to locate the problem faster, so I want to be somehow pointed to Test_MethodA. It would help if I could, for example, make some of the tests in the test class execute in a particular order; when they fail, I'd start looking for the cause of the failure in the first failing test. Do you have any idea how to do this?
Edit: By suggesting that a solution would be to execute the tests in a particular order, I have probably gone too far and in the wrong direction. I don't care about the order of the tests. It's just that some of the tests will always fail if a prerequisite isn't valid. E.g. I have a test class that tests a DAO class (OK, probably not a UNIT test, but there's logic in the database stored procedures that needs to be tested; that's not the point here, I think). I need to insert some records into a table in order to test that the method responsible for retrieving the records (let's call it GetAll()) gets them all in the correct order. I do the insert by using a method on the DAO class; let's call it Insert(). I have tests in place that verify that the Insert() method works as expected. Now I want to test the GetAll() method. In order to get the database into the desired state, I use the Insert() method. If Insert() doesn't work, most tests for GetAll() will fail. I'd prefer to mark the tests that can't pass because Insert() doesn't work as inconclusive rather than failed. It would ease finding the cause of the problem if I knew which method/test to look into first.

You can't (and shouldn't) execute unit tests in a specific order. The underlying reason for this is to prevent Interacting Tests - I realize that your motivation for requesting such a feature is different, but that's the reason why unit test frameworks don't allow you to order tests. In fact, last time I checked, xUnit.net even randomizes the order.
One could argue that the fact that some of your tests depend on a different method call on the same class is a symptom of tight coupling, but that's not always the case (state machines come to mind).
However, if possible, consider using a Back Door instead of the other method in question.
If you can't do either that or decouple the interdependency (e.g. by making the first method virtual and using the Extract and Override technique), you will have to live with it.
Here's an example:
public class MyClass
{
    public virtual void FirstMethod() { /* do something... */ }
    public void SecondMethod() { }
}
Since FirstMethod is virtual, you can derive from MyClass and override its behavior. You can also use a dynamic mock to do that for you. With Moq, it would look like this:
var sutStub = new Mock<MyClass>();
// By default, Moq overrides all virtual methods without calling base.
// Now invoke both methods in sequence:
sutStub.Object.FirstMethod(); // overridden by Moq, so it does nothing
sutStub.Object.SecondMethod();
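If you'd rather not bring in a mocking library, the Extract and Override technique mentioned above can be done by hand with a test-specific subclass; a minimal sketch building on the MyClass example:

// Test-specific subclass: overrides the virtual method so that
// SecondMethod can be exercised without FirstMethod's real behavior.
public class TestableMyClass : MyClass
{
    public override void FirstMethod()
    {
        // intentionally does nothing in tests
    }
}

// In a test:
var sut = new TestableMyClass();
sut.SecondMethod(); // FirstMethod's real behavior is out of the picture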

I think I would indeed have the assertion on the method_A() result in every test relying on its result, even if this introduces some duplication. Then I would use the assertion message to point to the method_A() failure.
assert("method_A() returned true", true, rc);
Perhaps I will end up extracting the method_A() call and the assertion into a helper function to remove the duplication.
Now let's imagine method_A() queries an object and returns it, or NULL when no object is found. Then this assertion is a guard; it is necessary with languages such as C or C++ that do not have a NullPointerException.
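In MSTest you can combine that helper idea with Assert.Inconclusive to get exactly the behavior the question asks for: dependent tests report Inconclusive instead of Failed when the prerequisite is broken. A minimal sketch, assuming a DAO with the Insert()/GetAll() methods from the question (MyDao and Record are hypothetical names):

[TestMethod]
public void GetAll_ReturnsAllInsertedRecords()
{
    var dao = new MyDao();
    InsertOrInconclusive(dao, new Record());
    InsertOrInconclusive(dao, new Record());

    var all = dao.GetAll();

    Assert.AreEqual(2, all.Count);
}

// Guard helper: if the prerequisite Insert() is broken, this test is marked
// Inconclusive, so only the dedicated Insert() tests actually fail.
private void InsertOrInconclusive(MyDao dao, Record record)
{
    try
    {
        dao.Insert(record);
    }
    catch (Exception ex)
    {
        Assert.Inconclusive("Prerequisite Insert() failed: " + ex.Message);
    }
}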

I'm afraid you can't do this. The only solution is to redesign your code and break it up into smaller methods so that unit tests can call them one by one. Of course, this isn't always desirable.
With Visual Studio you can order your tests: see here. But I'd advise you to stay away from this technique as much as possible: unit tests are meant to be run anywhere, anytime, and in any order.
EDIT: why is this a problem for you? All failing tests point to the same method anyway...

Related

How should I unit test functions with many subfunctions?

I'm looking to better understand how I should test functions that have many substeps or subfunctions.
Let's say I have the function
// Modify the state of the class somehow
public void DoSomething()
{
    DoSomethingA();
    DoSomethingB();
    DoSomethingC();
}
Every function here is public. Each subfunction has 2 paths, so to test every path through DoSomething() I'd need 2*2*2 = 8 tests. By writing 8 tests for DoSomething(), I will have indirectly tested the subfunctions too.
So should I be testing like this, or instead write unit tests for each of the subfunctions and then only write 1 test case that measures the final state of the class after DoSomething() and ignore all the possible paths? A total of 2+2+2+1 = 7 tests. But is it bad then that the DoSomething() test case will depend on the other unit test cases to have complete coverage?
There appears to be a very prevalent religious belief that testing should be unit testing. While I do not intend to underestimate the usefulness of unit testing, I would like to point out that it is just one possible flavor of testing, and its extensive (or even exclusive) use is indicative of people (or environments) that are somewhat insecure about what they are doing.
In my experience knowledge of the inner workings of a system is useful as a hint for testing, but not as an instrument for testing. Therefore, black box testing is far more useful in most cases, though that's admittedly in part because I do not happen to be insecure about what I am doing. (And that is in turn because I use assertions extensively, so essentially all of my code is constantly testing itself.)
Without knowing the specifics of your case, I would say that, in general, the fact that DoSomething() works by invoking DoSomethingA(), then DoSomethingB(), and then DoSomethingC() is an implementation detail that your black-box test had best be unaware of. So I would definitely not test that DoSomething() invokes DoSomethingA(), DoSomethingB(), and DoSomethingC(); I would only test that it returns the right results, and, using the knowledge that it does in fact invoke those three functions as a hint, I would implement precisely those 7 tests that you were planning to use.
On the other hand, it should be noted that if DoSomethingA(), DoSomethingB(), and DoSomethingC() are also public functions, then you should test them individually, too.
Definitely test every subfunction separately (because they're public).
It would help you find the problem if one pops up.
If DoSomething only uses other functions, I wouldn't bother writing additional tests for it. If it has some other logic, I would test it, but assume all functions inside work properly (if they're in a different class, mock them).
The point is finding what the function does that is not covered in other tests and testing that.
Indirect testing should be avoided. You should write unit tests for each function explicitly; after that, you should mock the submethods and test your main function's behavior. For example:
You have a method which inserts a user into the DB, and the method looks like this:
void InsertUser(User user)
{
    var exists = SomeExternal.UserExists(user);
    if (exists)
        throw new Exception("bla bla bla");
    // Insert code here
}
If you want to test the InsertUser function, you should mock the external/sub/nested methods and test the behaviour of InsertUser itself.
This gives you two tests: 1 - "When the user exists, it should throw an exception"; 2 - "When the user does not exist, it should insert the user".
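A hedged sketch of those two tests with Moq, assuming the external check is injected behind an interface (IExternalService, UserService, and the constructor are assumptions; the original code calls SomeExternal statically, which Moq cannot intercept):

public interface IExternalService
{
    bool UserExists(User user);
}

public class UserService
{
    private readonly IExternalService _external;
    public UserService(IExternalService external) { _external = external; }

    public void InsertUser(User user)
    {
        if (_external.UserExists(user))
            throw new Exception("bla bla bla");
        // Insert code here
    }
}

[TestMethod]
public void InsertUser_WhenUserExists_Throws()
{
    var external = new Mock<IExternalService>();
    external.Setup(e => e.UserExists(It.IsAny<User>())).Returns(true);
    var sut = new UserService(external.Object);

    Assert.ThrowsException<Exception>(() => sut.InsertUser(new User()));
}

[TestMethod]
public void InsertUser_WhenUserDoesNotExist_Inserts()
{
    var external = new Mock<IExternalService>();
    external.Setup(e => e.UserExists(It.IsAny<User>())).Returns(false);
    var sut = new UserService(external.Object);

    sut.InsertUser(new User()); // should not throw; assert on the inserted state here
}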

What is the Pattern for Unit Testing flow control

I have a method that checks some assumptions and either follows the happy path or terminates along one of the unhappy paths. I've either designed it poorly, or I'm missing the method for testing this kind of control flow.
if (this.officeInfo.OfficeClosed)
{
    this.phoneCall.InformCallerThatOfficeIsClosedAndHangUp();
    return;
}
if (!this.operators.GetAllOperators().Any())
{
    this.phoneCall.InformCallerThatNoOneIsAvailableAndSendToVoicemail();
    return;
}
Call call = null;
// ("operator" is a reserved word in C#, so the loop variable is named op)
foreach (var op in this.operators.GetAllOperators())
{
    call = op.Call();
    if (call != null) { break; }
}
and so on. I've got my dependencies injected. I've got my mocks moq'd. I can make sure that this or that is called, but I don't know how to test that the "return" happens. If TDD means I don't write a line until I have a test that fails without it, I'm stuck.
How would you test it? Or is there a way to write it that makes it more testable?
Update: Several answers have been posted saying that I should test the resultant calls, not the flow control. The problem I have with this approach is that every test is required to set up and test the state and results of the other tests. This seems really unwieldy and brittle. Shouldn't I be able to test the first if clause alone, and then test the second one alone? Do I really need to have a combinatorially expanding set of tests that start looking like Method_WithParameter_DoesntInvokeMethod8IfMethod7IsTrueandMethod6IsTrueAndMethod5IsTrueAndMethod4IsTrueAndMethod3IsFalseAndMethod2IsTrueAndMethod1isAaaaccck()?
I think you want to test the program's outputs: for example, that when this.officeInfo.OfficeClosed then the program does invoke this.phoneCall.InformCallerThatOfficeIsClosedAndHangUp() and does not invoke other methods such as this.operators.GetAllOperators().
I think that your test does this by asking its mock objects (phoneCall, etc.) which of their methods was invoked, or by getting them to throw an exception if any of their methods are invoked unexpectedly.
One way to do it is to make a log file of the program's inputs (e.g. 'OfficeClosed returns true') and outputs: then run the test, let the test generate the log file, and then assert that the contents of the generated log file match the expected log file contents for that test.
I'm not sure that's really the right approach. You care about whether or not the method produced the expected result, not necessarily how control "flowed" through the particular method. For example, if phoneCall.InformCallerThatOfficeIsClosedAndHangUp is called, then I assume some result is recorded somewhere. So in your unit test, you would be asserting that result was indeed recorded (either by checking a database record, file, etc.).
With that said, it's important to ensure that your unit tests indeed cover your code. For that, you can use a tool like NCover to ensure that all of your code is being exercised. It'll generate a coverage report which will show you exactly which lines were executed by your unit tests and, more importantly, which ones weren't.
You could go ballistic and use a strategy pattern. Something along the lines of having an interface IHandleCall, with a single void method DoTheRightThing(), and 3 classes HandleOfficeIsClosed, HandleEveryoneIsBusy, HandleGiveFirstOperatorAvailable, which implement the interface. And then have code like:
IHandleCall handleCall;
if (this.officeInfo.OfficeClosed)
{
    handleCall = new HandleOfficeIsClosed();
}
else if (/* other condition */)
{
    handleCall = new OtherImplementation();
}
handleCall.DoTheRightThing();
return;
That way you can get rid of the multiple return points in your method. Note that this is a very dirty outline, but essentially at that point you should extract the if/else into some factory, and then the only thing you have to test is that your class calls the factory, and that handleCall.DoTheRightThing() is called - (and of course that the factory returns the right strategy).
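A rough sketch of that shape, with the selection logic extracted into a factory (the interface and class names come from the answer; the factory and its signature are assumptions):

public interface IHandleCall
{
    void DoTheRightThing();
}

public class HandleOfficeIsClosed : IHandleCall
{
    public void DoTheRightThing() { /* inform caller that office is closed, hang up */ }
}

public class HandleEveryoneIsBusy : IHandleCall
{
    public void DoTheRightThing() { /* send caller to voicemail */ }
}

// Hypothetical factory: once the if/else chain lives here, the method under
// test only needs two checks -- that it asks the factory for a handler and
// that it invokes DoTheRightThing() on whatever comes back.
public interface ICallHandlerFactory
{
    IHandleCall CreateFor(bool officeClosed, bool anyOperatorAvailable);
}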
In any case, because you have already guarded against no operator being available, you could simplify the end to:
var op = this.operators.FindFirst();
call = op.Call();
Don't test the flow control, just test the expected behavior. That is, unit testing does not care about the implementation details, only that the behavior of the method matches the specifications of the method. So if Add(int x, int y) should produce the result 4 on input x = 2, y = 2, then test that the output is 4 but don't worry about how Add produced the result.
To put it another way, unit testing should be invariant under implementation details and refactoring. But if you're testing implementation details in your unit tests, then you can't refactor without breaking them. For example, if you implement a method GetPrime(int k) to return the kth prime, then check that GetPrime(10) returns 29, but don't test the flow control inside the method. If you implement GetPrime using the Sieve of Eratosthenes, test the flow control inside the method, and later refactor to use the Sieve of Atkin, your unit tests will break. Again, all that matters is that GetPrime(10) returns 29, not how it gets there.
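For instance, a behavior-only test of that method survives the sieve swap untouched (a sketch, assuming GetPrime is a static method on a hypothetical PrimeCalculator class):

[Test]
public void GetPrime_ReturnsTheTenthPrime()
{
    // 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 -- the 10th prime is 29.
    // Nothing here depends on which sieve is used internally.
    Assert.AreEqual(29, PrimeCalculator.GetPrime(10));
}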
If you are stuck while using TDD, it's a good thing: it means TDD is driving your design and you are looking into how to change it so you can test it.
You can either:
1) verify state: check the state of the SUT after it has executed, or
2) verify behavior: check that the calls made to the mock objects complied with the test's expectations
If you don't like how either of these approaches look in your test it's time to refactor the code.
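A compact illustration of the two options, with hypothetical types (the behavior check uses Moq):

// 1) State verification: act, then assert on the SUT's observable state.
var cart = new ShoppingCart();
cart.Add(new Item("book"));
Assert.AreEqual(1, cart.Count);

// 2) Behavior verification: assert that the SUT made the expected call
//    on its collaborator.
var mailer = new Mock<IMailer>();
var checkout = new Checkout(mailer.Object);
checkout.Complete(cart);
mailer.Verify(m => m.SendReceipt(cart), Times.Once());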
The pattern described by Aaron Feng and K. Scott Allen would solve my problem and its testability concerns. The only issue I see is that it requires all the computation to be performed up front: the decision data object needs to be populated before all of the conditionals. That's great unless it requires successive round trips to the persistence storage.

How to test the function behavior in unit test?

If a function just calls another function or performs actions, how do I test it? Currently, I enforce that all functions should return a value so that I can assert on the return values. However, I think this approach messes up the API, because in the production code I don't need those functions to return values. Are there any good solutions?
I think mock objects might be a possible solution. I want to know when I should use asserts and when I should use mock objects. Is there any general guideline?
Thank you
Let's use BufferedStream.Flush() as an example method that doesn't return anything; how would we test this method if we had written it ourselves?
There is always some observable effect, otherwise the method would not exist. So the answer can be to test for the effect:
[Test]
public void FlushWritesToUnderlyingStream()
{
    var memory = new byte[10];
    var memoryStream = new MemoryStream(memory);
    var buffered = new BufferedStream(memoryStream);
    buffered.WriteByte(0xFF);
    Assert.AreEqual(0x00, memory[0]); // not yet flushed, memory unchanged
    buffered.Flush();
    Assert.AreEqual(0xFF, memory[0]); // now it has changed
}
The trick is to structure your code so that these effects aren't too hard to observe in a test:

- Explicitly pass collaborator objects, just like how the memoryStream is passed to the BufferedStream in the constructor. This is called dependency injection.
- Program against an interface, just like how BufferedStream is programmed against the Stream interface. This enables you to pass simpler, test-friendly implementations (like MemoryStream in this case) or use a mocking framework (like Moq or RhinoMocks), which is all great for unit testing.
Sorry for not answering straight, but... are you sure you have the right balance in your testing?
I wonder if you are not testing too much?
Do you really need to test a function that merely delegates to another?
Returns only for the tests
I agree with you when you write that you don't want to add return values that are useful only for the tests, not for production. This clutters your API, making it less clear, which is a huge cost in the end.
Also, your return value could seem correct to the test while the rest of the implementation does something different from what that value suggests, so the test probably isn't proving anything anyway...
Costs
Note that testing has an initial cost, the cost of writing the test.
If the implementation is very easy, the risk of failure is ridiculously low, but the time spent testing still accumulates (over hundreds or thousands of cases, it ends up being pretty serious).
But more than that, each time you refactor your production code, you will probably have to refactor your tests as well. So the maintenance cost of your tests will be high.
Testing the implementation
Testing what a method does (what other methods it calls, etc.) is criticized, just like testing a private method. There are several points made:
- This is fragile and costly: any code refactoring will break the tests, which increases the maintenance cost.
- Testing a private method does not bring much safety to your production code, because your production code is not making that call. It's like verifying something you won't actually need.
- When code effectively delegates to another method, the implementation is so simple that the risk of mistakes is very low, and the code almost never changes, so what works once (when you write it) will never break...
Yes, a mock is generally the way to go if you want to test that a certain function is called and that certain parameters are passed in.
Here's how to do it in Typemock (C#):
Isolate.Verify.WasCalledWithAnyArguments(() => myInstance.WeatherService("", "", null, 0));
Isolate.Verify.WasCalledWithExactArguments(() => myInstance.StockQuote("", "", null, 0));
In general, you should use asserts as much as possible, until you can't (for example, when you have to test whether you call an external web service API properly; you can't, or don't want to, communicate with the web service directly). In that case you use a mock to verify that a certain web service method is called with the correct parameters.
"I want to know when should I use assert and when should I use mock objects? Is there any general guide line?"
There's an absolute, fixed and important rule.
Your tests must contain asserts. The presence of an assert is what you use to see if the test passed or failed. A test is a method that calls the "component under test" (a function, an object, whatever) in a specific fixture and makes specific assertions about the component's behavior.
A test asserts something about the component being tested. Every test must have an assert, or it isn't a test; if it doesn't have an assert, it's not clear what you're doing.
A mock is a replacement for a component to simplify the test configuration. It is a "mock" or "imitation" or "false" component that replaces a real component. You use mocks to replace something and simplify your testing.
Let's say you're going to test function a. And function a calls function b.
The tests for function a must have an assert (or it's not a test).
The tests for a may need a mock for function b. To isolate the two functions, you test a with a mock for function b.
The tests for function b must have an assert (or it's not a test).
The tests for b may not need anything mocked. Or perhaps b makes an OS API call; this may need to be mocked. Or perhaps b writes to a file; this may need to be mocked.

What is the unit testing strategy for method call forwarding?

I have the following scenario:
public class CarManager
{
    ..
    public long AddCar(Car car)
    {
        try
        {
            string username = _authorizationManager.GetUsername();
            ...
            long id = _carAccessor.AddCar(username, car.Id, car.Name, ....);
            if (id == 0)
            {
                throw new Exception("Car was not added");
            }
            return id;
        }
        catch (Exception ex)
        {
            throw new AddCarException(ex);
        }
    }

    public List<long> AddCars(List<Car> cars)
    {
        List<long> ids = new List<long>();
        foreach (Car car in cars)
        {
            ids.Add(AddCar(car));
        }
        return ids;
    }
}
I am mocking out _carAccessor, _authorizationManager, etc.
Now I want to unit test the CarManager class.
Should I have multiple tests for AddCar() such as
AddCarTest()
AddCarTestAuthorizationManagerException()
AddCarTestCarAccessorNoId()
AddCarTestCarAccessorException()
For AddCars(), should I repeat all the previous tests, since AddCars() calls AddCar()? It seems like repeating oneself. Should I perhaps not be calling AddCar() from AddCars()?
Please help.
There are two issues here:
Unit tests should do more than test methods one at a time. They should be designed to prove that your class can do the job it was designed for when integrated with the rest of the system. So you should mock out the dependencies and then write a test for each way in which your class will actually be used. For each (non-trivial) class you write there will be scenarios that involve the client code calling methods in a particular pattern.
There is nothing wrong with AddCars calling AddCar. You should repeat tests for error handling, but only when it serves a purpose. One of the unofficial rules of unit testing is 'test to the point of boredom' or (as I like to think of it) 'test till the fear goes away'. Otherwise you would be writing tests forever. So if you are confident a test will add no value, then don't write it. You may be wrong of course, in which case you can come back later and add it in. You don't have to produce a perfect test first time round, just a firm basis on which you can build as you better understand what your class needs to do.
A unit test should focus only on its corresponding class under test; the class's collaborators should be mocked.
Suppose you have a class (CarRegistry) that uses some kind of data access object (for example, CarPlatesDAO) which loads/stores car plate numbers from a relational database.
When you are testing the CarRegistry, you should not care whether CarPlateDAO performs correctly, since our DAO has its own unit tests.
You just create a mock that behaves like the DAO and returns correct or wrong values according to the expected behavior. You plug this mock DAO into your CarRegistry and test only the target class, without caring whether all the aggregated classes are "green".
Mocking allows separation of testable classes and better focus on specific functionality.
When unit testing AddCar, create tests that exercise every code path. If _authorizationManager.GetUsername() can throw an exception, create a test where your mock for this object throws. BTW: don't throw or catch instances of Exception; derive a meaningful exception class.
For the AddCars method, you definitely should call AddCar. But you might consider making AddCar virtual and overriding it just to test that it's called with all the cars in the list.
Sometimes you'll have to change the class design for testability.
Should I have multiple tests for
AddCar() such as
AddCarTest()
AddCarTestAuthorizationManagerException()
AddCarTestCarAccessorNoId()
AddCarTestCarAccessorException()
Absolutely! This tells you valuable information
For AddCars() should I repeat all previous tests as AddCars() calls AddCar() - it seems
like repeating oneself? Should I perhaps not be calling AddCar() from AddCars()?
Calling AddCar from AddCars is a great idea; it avoids violating the DRY principle. Similarly, you shouldn't be repeating tests. Think of it this way - you already wrote tests for AddCar, so when testing AddCars you can assume AddCar does what it says on the tin.
Let's put it this way - imagine AddCar was in a different class. You would have no knowledge of an authorisation manager. Test AddCars without the knowledge of what AddCar has to do.
For AddCars, you need to test all normal boundary conditions (does an empty list work, etc.) You probably don't need to test the situation where AddCar throws an exception, as you're not attempting to catch it in AddCars.
Writing tests that explore every possible scenario within a method is good practice; that's how I unit test in my projects. Tests like AddCarTestAuthorizationManagerException(), AddCarTestCarAccessorNoId(), or AddCarTestCarAccessorException() get you thinking about all the different ways your code can fail, which has led me to find new kinds of failures for a method that I might otherwise have missed, as well as to improve the overall design of the class.
In a situation like AddCars() calling AddCar(), I would mock the AddCar() method and count the number of times it's called by AddCars(). The mocking library I use allows me to create a mock of CarManager and mock only the AddCar() method but not AddCars(). Your unit test can then set how many times it expects AddCar() to be called, which you know from the size of the list of cars passed in.
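With Moq, for example, that counting test could look roughly like this; it assumes AddCar is made virtual so it can be overridden, and it glosses over CarManager's constructor dependencies:

// Partial mock: CallBase = true keeps the real AddCars() implementation,
// while the virtual AddCar() is stubbed out and its calls are counted.
var cars = new List<Car> { new Car(), new Car(), new Car() };
var manager = new Mock<CarManager> { CallBase = true };
manager.Setup(m => m.AddCar(It.IsAny<Car>())).Returns(42L);

manager.Object.AddCars(cars);

manager.Verify(m => m.AddCar(It.IsAny<Car>()), Times.Exactly(cars.Count));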

When should I mock?

I have a basic understanding of mock and fake objects, but I'm not sure I have a feeling about when/where to use mocking - especially as it would apply to this scenario here.
Mock objects are useful when you want to test interactions between a class under test and a particular interface.
For example, we want to test that method sendInvitations(MailServer mailServer) calls MailServer.createMessage() exactly once, and also calls MailServer.sendMessage(m) exactly once, and no other methods are called on the MailServer interface. This is when we can use mock objects.
With mock objects, instead of passing a real MailServerImpl, or a test TestMailServer, we can pass a mock implementation of the MailServer interface. Before we pass a mock MailServer, we "train" it, so that it knows what method calls to expect and what return values to return. At the end, the mock object asserts, that all expected methods were called as expected.
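In C# with Moq, that training might look roughly like this (IMailServer, MailMessage, and sut are assumptions standing in for the MailServer interface and the object under test); a strict mock fails the test if any method other than the two we set up is called:

var mailServer = new Mock<IMailServer>(MockBehavior.Strict);
mailServer.Setup(m => m.CreateMessage()).Returns(new MailMessage());
mailServer.Setup(m => m.SendMessage(It.IsAny<MailMessage>()));

sut.SendInvitations(mailServer.Object); // sut: the object under test

// Assert that the expected interactions, and only those, took place.
mailServer.Verify(m => m.CreateMessage(), Times.Once());
mailServer.Verify(m => m.SendMessage(It.IsAny<MailMessage>()), Times.Once());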
This sounds good in theory, but there are also some downsides.
Mock shortcomings
If you have a mock framework in place, you are tempted to use a mock object every time you need to pass an interface to the class under test. This way you end up testing interactions even when it is not necessary. Unfortunately, unwanted (accidental) testing of interactions is bad, because then you're testing that a particular requirement is implemented in a particular way, instead of testing that the implementation produces the required result.
Here's an example in pseudocode. Let's suppose we've created a MySorter class and we want to test it:
// the correct way of testing
testSort() {
    testList = [1, 7, 3, 8, 2]
    MySorter.sort(testList)
    assert testList equals [1, 2, 3, 7, 8]
}

// incorrect, testing implementation
testSort() {
    testList = [1, 7, 3, 8, 2]
    MySorter.sort(testList)
    assert that compare(1, 2) was called once
    assert that compare(1, 3) was not called
    assert that compare(2, 3) was called once
    ....
}
(In this example we assume that it's not a particular sorting algorithm, such as quick sort, that we want to test; in that case, the latter test would actually be valid.)
In such an extreme example it's obvious why the latter example is wrong. When we change the implementation of MySorter, the first test does a great job of making sure we still sort correctly, which is the whole point of tests - they allow us to change the code safely. On the other hand, the latter test always breaks and it is actively harmful; it hinders refactoring.
Mocks as stubs
Mock frameworks often also allow less strict usage, where we don't have to specify exactly how many times methods should be called and what parameters are expected; they allow creating mock objects that are used as stubs.
Let's suppose we have a method sendInvitations(PdfFormatter pdfFormatter, MailServer mailServer) that we want to test. The PdfFormatter object can be used to create the invitation. Here's the test:
testInvitations() {
    // train as stub
    pdfFormatter = create mock of PdfFormatter
    let pdfFormatter.getCanvasWidth() returns 100
    let pdfFormatter.getCanvasHeight() returns 300
    let pdfFormatter.addText(x, y, text) returns true
    let pdfFormatter.drawLine(line) does nothing

    // train as mock
    mailServer = create mock of MailServer
    expect mailServer.sendMail() called exactly once

    // do the test
    sendInvitations(pdfFormatter, mailServer)

    assert that all pdfFormatter expectations are met
    assert that all mailServer expectations are met
}
In this example, we don't really care about the PdfFormatter object, so we just train it to quietly accept any call and return some sensible canned values for all the methods that sendInvitation() happens to call at this point. How did we come up with exactly this list of methods to train? We simply ran the test and kept adding methods until the test passed. Notice that we trained the stub to respond to each method without having a clue why it needs to call it; we simply added everything the test complained about. We are happy, the test passes.
But what happens later, when we change sendInvitations(), or some other class that sendInvitations() uses, to create fancier PDFs? Our test suddenly fails because more methods of PdfFormatter are now called and we didn't train our stub to expect them. And usually it's not only one test that fails in situations like this; it's every test that happens to use, directly or indirectly, the sendInvitations() method. We have to fix all those tests by adding more training. Also notice that we can't remove methods that are no longer needed, because we don't know which ones are not needed. Again, this hinders refactoring.
Also, the readability of the test suffers terribly; there's lots of code there that we didn't write because we wanted to, but because we had to. It's not us who want that code there. Tests that use mock objects look very complex and are often difficult to read. Tests should help the reader understand how the class under test should be used, so they should be simple and straightforward. If they are not readable, nobody is going to maintain them; in fact, it's easier to delete them than to maintain them.
How to fix that? Easily:
Try using real classes instead of mocks whenever possible. Use the real PdfFormatterImpl. If it's not possible, change the real classes to make it possible. Not being able to use a class in tests usually points to some problems with the class. Fixing the problems is a win-win situation - you fixed the class and you have a simpler test. On the other hand, not fixing it and using mocks is a no-win situation - you didn't fix the real class and you have more complex, less readable tests that hinder further refactorings.
Try creating a simple test implementation of the interface instead of mocking it in each test, and use this test class in all your tests. Create a TestPdfFormatter that does nothing. That way you can change it once for all tests, and your tests are not cluttered with lengthy setups where you train your stubs.
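A minimal sketch of such a shared test implementation, with the interface inferred from the pseudocode above (all names hypothetical):

public class Line { } // stub type for the example

public interface IPdfFormatter
{
    int GetCanvasWidth();
    int GetCanvasHeight();
    bool AddText(int x, int y, string text);
    void DrawLine(Line line);
}

// One do-nothing fake shared by every test: when the interface grows,
// you change this single class instead of re-training stubs in each test.
public class TestPdfFormatter : IPdfFormatter
{
    public int GetCanvasWidth() => 100;
    public int GetCanvasHeight() => 300;
    public bool AddText(int x, int y, string text) => true;
    public void DrawLine(Line line) { }
}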
All in all, mock objects have their use, but when not used carefully, they often encourage bad practices: testing implementation details, hindering refactoring, and producing tests that are difficult to read and maintain.
For some more details on shortcomings of mocks see also Mock Objects: Shortcomings and Use Cases.
A unit test should test a single codepath through a single method. When the execution of a method passes outside of that method, into another object, and back again, you have a dependency.
When you test that code path with the actual dependency, you are not unit testing; you are integration testing. While that's good and necessary, it isn't unit testing.
If your dependency is buggy, your test may be affected in such a way as to return a false positive. For instance, you may pass the dependency an unexpected null, and the dependency may not throw on null as it is documented to do. Your test does not encounter the null argument exception it should have encountered, and the test passes.
Also, you may find it hard, if not impossible, to reliably get the dependent object to return exactly what you want during a test. That also includes throwing expected exceptions within tests.
A mock replaces that dependency. You set expectations on calls to the dependent object, set the exact return values it should give you to perform the test you want, and/or what exceptions to throw so that you can test your exception handling code. In this way you can test the unit in question easily.
TL;DR: Mock every dependency your unit test touches.
Rule of thumb:
If the function you are testing needs a complicated object as a parameter, and it would be a pain to simply instantiate this object (if, for example it tries to establish a TCP connection), use a mock.
You should mock an object when you have a dependency in a unit of code you are trying to test that needs to be "just so".
For example, when you are trying to test some logic in your unit of code but you need to get something from another object and what is returned from this dependency might affect what you are trying to test - mock that object.
A great podcast on the topic can be found here