I am very new to Test Driven Development and cannot figure out how to write effective tests for a class I wrote. The class is as follows (Java):
public class MyServiceClassImpl implements MyService {
    private SomeExternalClient client;
    private AnotherExternalClient anotherClient;

    public MyServiceClassImpl() {
        client = SomeExternalClient.getInstance();
        anotherClient = new AnotherExternalClient(client);
    }

    public String methodWhichDoesSomething(String query) {
        return anotherClient.getResponse(query);
    }
}
For the test, I try a few queries and compare the response I get with the response I expect (I expect it because I know what anotherClient will return). It works alright but this is technically an integration test since I am calling an external dependency. I do not understand how to write "unit" tests in this case. More specifically, I don't know how to mock the dependencies since the fields are private, there are no setters and the constructor doesn't take any parameters. How would I "supply" the instance of the class with my mocks even if I created them? I wrote the class myself too so please let me know if I should re-design the class, maybe provide getters and setters?
This is a very common situation that most developers fall into. The question is how to make the code testable. A good rule of thumb: "If you don't have any security concerns, do not be afraid to change the design so your routines are testable." This is actually a very good thing, because your SUT (System Under Test) API becomes more appealing to its clients and easier to change and extend.
In your case, leave your integration test as it is, because it tests the whole system with database interaction, configuration, etc.
Generally, what is important here is the unit test. But looking at your code, the method
methodWhichDoesSomething(String query)
hardly has any behavior at all. It only calls another client to return a response.
So you need to decide whether you really need to write a unit test for this. I would not recommend it, as there is no behavior to unit test.
But if you really want to unit test it, then whether getResponse(..) has been called with the expected parameter is a good candidate.
To do that, inject your dependency AnotherExternalClient into your SUT (System Under Test):
public MyServiceClassImpl(AnotherExternalClient externalClient)
{
    this.anotherClient = externalClient;
}
In your test, set up a mock of AnotherExternalClient and verify that the method has been invoked. Use constructor injection if the parameter is mandatory for MyServiceClassImpl; otherwise use property (setter) injection if the dependency is optional.
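For example, here is a minimal sketch with JUnit 4 and Mockito (assuming AnotherExternalClient is not final so it can be mocked; the query and response strings are made up for illustration):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class MyServiceClassImplTest {

    @Test
    public void methodWhichDoesSomething_delegatesQueryToClient() {
        // create a mock of the dependency instead of using the real client
        AnotherExternalClient clientMock = mock(AnotherExternalClient.class);
        when(clientMock.getResponse("some query")).thenReturn("canned response");

        // inject the mock through the new constructor
        MyServiceClassImpl service = new MyServiceClassImpl(clientMock);

        String result = service.methodWhichDoesSomething("some query");

        // state check on the return value, plus an interaction check on the mock
        assertEquals("canned response", result);
        verify(clientMock).getResponse("some query");
    }
}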
UPDATE
Reg. "Inject your dependency"
The instance you create with new AnotherExternalClient(client), which is of type AnotherExternalClient, can be injected into your SUT (System Under Test) MyServiceClassImpl. You inject it either through a property or via the constructor. I will explain this a bit later.
You don't have to worry about writing code like
client = SomeExternalClient.getInstance();
inside the class, as this can be externalized: create the client elsewhere and use it to construct the AnotherExternalClient that gets passed in.
In other words, your SUT (System Under Test) MyServiceClassImpl should only care about AnotherExternalClient, not SomeExternalClient. Having fewer dependencies like this simplifies your design and makes it easier to unit test.
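A rough sketch of what that externalized wiring could look like (the factory class and method names here are invented for illustration):

// hypothetical composition root / factory: the only place that knows
// how AnotherExternalClient is built from SomeExternalClient
public final class MyServiceFactory {

    public static MyService createDefault() {
        SomeExternalClient client = SomeExternalClient.getInstance();
        AnotherExternalClient anotherClient = new AnotherExternalClient(client);
        return new MyServiceClassImpl(anotherClient);
    }
}

Production code calls MyServiceFactory.createDefault(), while tests construct MyServiceClassImpl directly with a mock or stub.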
Reg. "Property Injection vs Ctro Injection"
I won't repeat myself; here is another SO question that has some information on this.
Hope this helps.
This is critical, because when it comes to unit testing you can then easily provide a mock/fake implementation for testing.
I've covered the internal implementation of my class with unit tests. Is it still useful to also test my public interface (which is one function that is little more than a chain of calls to the internal methods)?
It feels like I would be adding additional tests that test the same thing(s).
Or is testing the public interface more of an integration test (even though I've stubbed my data access so it's all processed in memory), in that it tests whether all the unit-tested methods work well together?
Example:
internal bool internalCheck() {
// complex logic that is being unit tested
}
internal void internalDoSomething() {
// do stuff. is being unit tested
}
public void DoIt() {
if (internalCheck()) {
internalDoSomething();
}
}
Now if I am going to add tests that test DoIt, I will basically end up retesting all logic flows for internalCheck and assert that when it returns true, the internalDoSomething is being called.
Hmm I think I've figured it out:
I need to mock the class itself and check just that the correct calls are being made, pretty much ignoring real inputs/outputs. To test the public method, rather than retesting internalCheck, I use a mocking framework to have internalCheck return whatever the flow I want to test needs, and then verify that the correct methods have been called.
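For what it's worth, here is a rough Java/Mockito analogue of that idea (the example above is C#; the class here is hypothetical, and the internal methods are assumed to be overridable, e.g. protected, because a spy cannot stub private methods):

import static org.mockito.Mockito.*;

import org.junit.Test;

public class WorkerTest {

    // hypothetical class under test, mirroring the example above
    static class Worker {
        protected boolean internalCheck() { return false; /* complex logic, unit tested separately */ }
        protected void internalDoSomething() { /* does stuff, unit tested separately */ }

        public void doIt() {
            if (internalCheck()) {
                internalDoSomething();
            }
        }
    }

    @Test
    public void doIt_callsInternalDoSomething_whenCheckPasses() {
        // partial mock (spy): a real object whose selected methods are stubbed
        Worker worker = spy(new Worker());
        doReturn(true).when(worker).internalCheck();

        worker.doIt();

        verify(worker).internalDoSomething();
    }
}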
It depends on how you have defined your unit. Usually the unit is the class, so this would still be a unit test.
Even if you have tested the internal methods, it is still important to test the public interface, to make sure that the internal methods work together.
I follow the technique specified in Roy Osherove's The Art Of Unit Testing book while naming test methods - MethodName_Scenario_Expectation.
It works perfectly well for my 'unit' tests. But for the tests that I write for a 'controller' or 'coordinator' class, there isn't necessarily a single method that I want to test.
For these tests, I generate multiple conditions which make up one scenario and then I verify the expectation. For example, I may set some properties on different instances, generate an event and then verify that my expectation of the controller/coordinator is being met. Now, my controller handles events using a private event handler. Here my scenario is that I set some properties, say three:
condition1, condition2 and condition3
Also, my scenario includes
an event is raised
I don't have a method name as my event handler is private. How do I name such a test method?
I would use several tests and a different naming convention in this case:
Name the test class ClassName_Scenario (so you would have more than one test class for the same class)
Name the test methods Expectation1, Expectation2, Expectation3...
Also, instead of initializing the context in each test method, it would be initialized in a [SetUp] method
So you would have something like this:
[TestFixture]
public class ClassName_WhenScenarioX
{
    [SetUp]
    public void InitContextForScenarioX()
    {
    }

    [Test]
    public void ShouldDoY()
    {
        Assert.That(...);
    }

    [Test]
    public void ShouldRaiseEventZ()
    {
        Assert.That(...);
    }
}
Note that this will only work if the order of execution of your asserts is not important (each test is independent, even if they all depend on the same initial context)
I tend to use almost full sentences to describe what the tested class should actually be providing. This way unit tests are almost a documentation of the class. I tend to avoid adding technical details to the name of the test though, because (1) these are in the content of the test anyway (2) someone might change the content of the test but not the name.
Examples:
[TestFixture]
public class ClassNameTests
{
    [SetUp]
    public void BeforeEveryTest()
    {
    }

    [Test]
    public void ParsesCsvStringsToLists()
    {
        Assert.That(...);
    }

    [Test]
    public void ThrowsMyExceptionWhenInputStringIsMalformed()
    {
        Assert.That(...);
    }
}
I usually name my test methods after the scenario (and, if needed, its subcases), but I rarely include the method name or the expectations into the test name.
To me the scenario is most important, and this is what is not always obvious from the code, as it is on a higher level of abstraction than the code. So I need a nice descriptive test name to communicate it well. However, the called method is seen in the code, similarly the asserts / annotations document the expectations. And I am a big fan of the KISS principle...
I must add that I am working with legacy code, and our scenarios and unit tests are typically more bulky than a textbook unit test would be. Also, the interfaces we test are fairly simple, often having a single "execute" style of method per class.
If condition1, condition2 and condition3 make up a business rule, then name the test after the rule.
When unit testing a codebase, what are the tell-tale signs that I need to utilise mock objects?
Would this be as simple as seeing a lot of calls to other objects in the codebase?
Also, how would I unit test methods which don't return values? For example, if a method returns void but prints to a file, do I just check the file's contents?
Mocking is for external dependencies, so that's literally everything, no? File system, db, network, etc...
If anything, I probably over use mocks.
Whenever a class makes a call to another, generally I mock that call out, and I verify that the call was made with the correct parameters. Else where, I'll have a unit test that checks the concrete code of the mocked out object behaves correctly.
Example:
[Test]
public void FooMoo_callsBarBaz_whenXisGreaterThan5()
{
    int TEST_DATA = 6;

    var bar = new Mock<Bar>();
    bar.Setup(x => x.Baz(It.Is<int>(i => i == TEST_DATA)))
       .Verifiable();

    var foo = new Foo(bar.Object);

    foo.moo(TEST_DATA);

    bar.Verify();
}
...
[Test]
public void BarBaz_doesSomething_whenCalled()
{
    // another test
}
The thing for me is, if I try to test lots of classes as one big glob, then there's usually tonnes of setup code. Not only is this quite confusing to read as you try to get your head around all the dependencies, it's very brittle when changes need to be made.
I much prefer small succinct tests. Easier to write, easier to maintain, easier to understand the intent of the test.
Mocks/stubs/fakes/test doubles/etc. are fine in unit tests, and permit testing the class/system under test in isolation. Integration tests might not use any mocks; they actually hit the database or other external dependency.
You use a mock or a stub when you have to. Generally this is because the class you're trying to test has a dependency on an interface. For TDD you want to program to interfaces, not implementations, and use dependency injection (generally speaking).
A very simple case:
public class ClassToTest
{
    private readonly IDependency _dependency;

    public ClassToTest(IDependency dependency)
    {
        _dependency = dependency;
    }

    public bool MethodToTest()
    {
        return _dependency.DoSomething();
    }
}
IDependency is an interface, possibly one with expensive calls (database access, web service calls, etc.). A test method might contain code similar to:
// Arrange
var mock = new Mock<IDependency>();
mock.Setup(x => x.DoSomething()).Returns(true);
var systemUnderTest = new ClassToTest(mock.Object);
// Act
bool result = systemUnderTest.MethodToTest();
// Assert
Assert.That(result, Is.True);
Note that I'm doing state testing (as #Finglas suggested), and I'm only asserting against the system under test (the instance of the class I'm testing). I might check property values (state) or the return value of a method, as this case shows.
I recommend reading The Art of Unit Testing, especially if you're using .NET.
Unit tests are only for one piece of code that works autonomously within itself. This means that it doesn't depend on other objects to do its work. You should use mocks if you are doing Test-Driven programming or Test-First programming. You would create a mock (or stub as I like to call it) of the function you will be creating and set certain conditions for the test to pass. Originally the function returns false and the test fails, which is expected ... then you write the code to do the real work until it passes.
But what I think you are referring to is integration testing, not unit testing. In that case, you should use mocks if you are waiting for other programmers to finish their work and you currently don't have access to the functions or objects they are creating. If you know the interface, which hopefully you do otherwise mocking is pointless and a waste of time, then you can create a dumbed-down version of what you are hoping to get in the future.
In short, mocks are best utilized when you are waiting for others and need something there in order to finish your work.
You should try to always return a value if possible. Sometimes you run into problems where you are already returning something, but in C and C++ you can have output parameters and then use the return value for error checking.
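As for the original question about a void method that writes to a file: one common option is to inject the output destination, so the test can check state in memory instead of reading a real file. A small Java sketch (all names here are made up):

import static org.junit.Assert.assertEquals;

import java.io.PrintWriter;
import java.io.StringWriter;

import org.junit.Test;

public class ReportPrinterTest {

    // hypothetical class under test: a void method that writes to an injected writer
    static class ReportPrinter {
        private final PrintWriter out;

        ReportPrinter(PrintWriter out) {
            this.out = out;
        }

        public void printHeader(String title) {
            out.println("=== " + title + " ===");
        }
    }

    @Test
    public void printHeader_writesDecoratedTitle() {
        StringWriter buffer = new StringWriter();
        ReportPrinter printer = new ReportPrinter(new PrintWriter(buffer, true));

        printer.printHeader("Sales");

        // the void method's effect is observed through the injected writer,
        // not by reading a file from disk
        assertEquals("=== Sales ===" + System.lineSeparator(), buffer.toString());
    }
}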
I know how I use these terms, but I'm wondering if there are accepted definitions for faking, mocking, and stubbing for unit tests? How do you define these for your tests? Describe situations where you might use each.
Here is how I use them:
Fake: a class that implements an interface but contains fixed data and no logic. Simply returns "good" or "bad" data depending on the implementation.
Mock: a class that implements an interface and allows the ability to dynamically set the values to return/exceptions to throw from particular methods and provides the ability to check if particular methods have been called/not called.
Stub: Like a mock class, except that it doesn't provide the ability to verify that methods have been called/not called.
Mocks and stubs can be hand generated or generated by a mocking framework. Fake classes are generated by hand. I use mocks primarily to verify interactions between my class and dependent classes. I use stubs once I have verified the interactions and am testing alternate paths through my code. I use fake classes primarily to abstract out data dependencies or when mocks/stubs are too tedious to set up each time.
You can get some information from the following sources.
From Martin Fowler, about Mocks and Stubs:
Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production
Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
From xUnit Test Patterns:
Fake: We acquire or build a very lightweight implementation of the same functionality as provided by a component that the SUT depends on and instruct the SUT to use it instead of the real.
Stub : This implementation is configured to respond to calls from the SUT with the values (or exceptions) that will exercise the Untested Code (see Production Bugs on page X) within the SUT. A key indication for using a Test Stub is having Untested Code caused by the inability to control the indirect inputs of the SUT
Mock Object that implements the same interface as an object on which the SUT (System Under Test) depends. We can use a Mock Object as an observation point when we need to do Behavior Verification to avoid having an Untested Requirement (see Production Bugs on page X) caused by an inability to observe side-effects of invoking methods on the SUT.
Personally,
I try to simplify by using just two terms: Mock and Stub. I use a Mock when it's an object that returns a value that is supplied to the tested class. I use a Stub to mimic an interface or abstract class to be tested. In fact, it doesn't really matter what you call them; they are all classes that aren't used in production and are used as utility classes for testing.
Stub - an object that provides predefined answers to method calls.
Mock - an object on which you set expectations.
Fake - an object with limited capabilities (for the purposes of testing), e.g. a fake web service.
Test Double is the general term for stubs, mocks and fakes. But informally, you'll often hear people simply call them mocks.
I am surprised that this question has been around for so long and nobody has as yet provided an answer based on Roy Osherove's "The Art of Unit Testing".
In "3.1 Introducing stubs" defines a stub as:
A stub is a controllable replacement for an existing dependency
(or collaborator) in the system. By using a stub, you can test your code without
dealing with the dependency directly.
And defines the difference between stubs and mocks as:
The main thing to remember about mocks versus stubs is that mocks are just like stubs, but you assert against the mock object, whereas you do not assert against a stub.
Fake is just the name used for both stubs and mocks. For example when you don't care about the distinction between stubs and mocks.
The way Osherove distinguishes between stubs and mocks means that any class used as a fake for testing can be either a stub or a mock. Which it is for a specific test depends entirely on how you write the checks in your test.
When your test checks values in the class under test, or actually anywhere but the fake, the fake was used as a stub. It just provided values for the class under test to use, either directly through values returned by calls on it or indirectly through causing side effects (in some state) as a result of calls on it.
When your test checks values of the fake, it was used as a mock.
Example of a test where class FakeX is used as a stub:
const int pleaseReturn5 = 5;
var fake = new FakeX(pleaseReturn5);
var cut = new ClassUnderTest(fake);

cut.SquareIt();

Assert.AreEqual(25, cut.SomeProperty);
The fake instance is used as a stub because the Assert doesn't use fake at all.
Example of a test where class FakeX is used as a mock:
const int pleaseReturn5 = 5;
var fake = new FakeX(pleaseReturn5);
var cut = new ClassUnderTest(fake);

cut.SquareIt();

Assert.AreEqual(25, fake.SomeProperty);
In this case the Assert checks a value on fake, making that fake a mock.
Now, of course these examples are highly contrived, but I see great merit in this distinction. It makes you aware of how you are testing your stuff and where the dependencies of your test are.
I agree with Osherove that
from a pure maintainability perspective, in my tests using mocks creates more trouble than not using them. That has been my experience, but I’m always learning something new.
Asserting against the fake is something you really want to avoid as it makes your tests highly dependent upon the implementation of a class that isn't the one under test at all. Which means that the tests for class ActualClassUnderTest can start breaking because the implementation for ClassUsedAsMock changed. And that sends up a foul smell to me. Tests for ActualClassUnderTest should preferably only break when ActualClassUnderTest is changed.
I realize that writing asserts against the fake is a common practice, especially when you are a mockist type of TDD subscriber. I guess I am firmly with Martin Fowler in the classicist camp (See Martin Fowler's "Mocks aren't Stubs") and like Osherove avoid interaction testing (which can only be done by asserting against the fake) as much as possible.
For fun reading on why you should avoid mocks as defined here, google for "fowler mockist classicist". You'll find a plethora of opinions.
As mentioned by the top-voted answer, Martin Fowler discusses these distinctions in Mocks Aren't Stubs, and in particular the subheading The Difference Between Mocks and Stubs, so make sure to read that article.
Rather than focusing on how these things are different, I think it's more enlightening to focus on why these are distinct concepts. Each exists for a different purpose.
Fakes
A fake is an implementation that behaves "naturally", but is not "real". These are fuzzy concepts and so different people have different understandings of what makes things a fake.
One example of a fake is an in-memory database (e.g. using sqlite with the :memory: store). You would never use this for production (since the data is not persisted), but it's perfectly adequate as a database to use in a testing environment. It's also much more lightweight than a "real" database.
As another example, perhaps you use some kind of object store (e.g. Amazon S3) in production, but in a test you can simply save objects to files on disk; then your "save to disk" implementation would be a fake. (Or you could even fake the "save to disk" operation by using an in-memory filesystem instead.)
As a third example, imagine an object that provides a cache API; an object that implements the correct interface but that simply performs no caching at all but always returns a cache miss would be a kind of fake.
The purpose of a fake is not to affect the behavior of the system under test, but rather to simplify the implementation of the test (by removing unnecessary or heavyweight dependencies).
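For instance, a hand-rolled in-memory repository fake might look like this (Java; the repository interface and all names are invented for illustration):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// hypothetical production interface
interface UserRepository {
    void save(String id, String name);
    Optional<String> findName(String id);
}

// fake: a genuinely working implementation, just not production-grade (nothing is persisted)
class InMemoryUserRepository implements UserRepository {
    private final Map<String, String> users = new LinkedHashMap<>();

    @Override
    public void save(String id, String name) {
        users.put(id, name);
    }

    @Override
    public Optional<String> findName(String id) {
        return Optional.ofNullable(users.get(id));
    }
}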
Stubs
A stub is an implementation that behaves "unnaturally". It is preconfigured (usually by the test set-up) to respond to specific inputs with specific outputs.
The purpose of a stub is to get your system under test into a specific state. For example, if you are writing a test for some code that interacts with a REST API, you could stub out the REST API with an API that always returns a canned response, or that responds to an API request with a specific error. This way you could write tests that make assertions about how the system reacts to these states; for example, testing the response your users get if the API returns a 404 error.
A stub is usually implemented to only respond to the exact interactions you've told it to respond to. But the key feature that makes something a stub is its purpose: a stub is all about setting up your test case.
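A tiny sketch of that idea (Java; the API interface and the 404 scenario are assumptions for illustration):

// hypothetical dependency of the system under test
interface ProfileApi {
    int fetchProfileStatus(String userId);
}

// stub: preconfigured to put the system under test into the "API returned 404" state
class NotFoundProfileApiStub implements ProfileApi {
    @Override
    public int fetchProfileStatus(String userId) {
        return 404; // canned response, regardless of input
    }
}

// a test would then do something like:
//   ProfilePage page = new ProfilePage(new NotFoundProfileApiStub());
//   assertEquals("User not found", page.render());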
Mocks
A mock is similar to a stub, but with verification added in. The purpose of a mock is to make assertions about how your system under test interacted with the dependency.
For example, if you are writing a test for a system that uploads files to a website, you could build a mock that accepts a file and that you can use to assert that the uploaded file was correct. Or, on a smaller scale, it's common to use a mock of an object to verify that the system under test calls specific methods of the mocked object.
Mocks are tied to interaction testing, which is a specific testing methodology. People who prefer to test system state rather than system interactions will use mocks sparingly if at all.
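A hand-written mock along those lines can be as simple as recording what it was asked to do, so the test can assert on it afterwards (Java; all names are invented for illustration):

// hypothetical dependency of the system under test
interface Uploader {
    void upload(String fileName, byte[] content);
}

// mock: records the interaction so the test can verify it
class RecordingUploaderMock implements Uploader {
    String lastFileName;
    byte[] lastContent;
    int uploadCount;

    @Override
    public void upload(String fileName, byte[] content) {
        lastFileName = fileName;
        lastContent = content;
        uploadCount++;
    }
}

// in the test, after exercising the system under test:
//   assertEquals(1, uploaderMock.uploadCount);
//   assertEquals("report.pdf", uploaderMock.lastFileName);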
Test doubles
Fakes, stubs, and mocks all belong to the category of test doubles. A test double is any object or system you use in a test instead of something else. Most automated software testing involves the use of test doubles of some kind or another. Some other kinds of test doubles include dummy values, spies, and I/O blackholes.
The thing that you assert on is called a mock object.
Everything else that just helped the test run is a stub.
To illustrate the usage of stubs and mocks, I would like to also include an example based on Roy Osherove's "The Art of Unit Testing".
Imagine, we have a LogAnalyzer application which has the sole functionality of printing logs. It not only needs to talk to a web service, but if the web service throws an error, LogAnalyzer has to log the error to a different external dependency, sending it by email to the web service administrator.
Here’s the logic we’d like to test inside LogAnalyzer:
if (fileName.Length < 8)
{
    try
    {
        service.LogError("Filename too short:" + fileName);
    }
    catch (Exception e)
    {
        email.SendEmail("a", "subject", e.Message);
    }
}
How do you test that LogAnalyzer calls the email service correctly when the web service throws an exception?
Here are the questions we’re faced with:
How can we replace the web service?
How can we simulate an exception from the web service so that we can
test the call to the email service?
How will we know that the email service was called correctly or at
all?
We can deal with the first two questions by using a stub for the web service. To solve the third problem, we can use a mock object for the email service.
A fake is a generic term that can be used to describe either a stub or a mock. In our test, we'll have two fakes. One will be the email service mock, which we'll use to verify that the correct parameters were sent to the email service. The other will be a stub that we'll use to simulate an exception thrown from the web service. It's a stub because we won't be using the web service fake to verify the test result, only to make sure the test runs correctly. The email service is a mock because we'll assert against it that it was called correctly.
[TestFixture]
public class LogAnalyzer2Tests
{
    [Test]
    public void Analyze_WebServiceThrows_SendsEmail()
    {
        StubService stubService = new StubService();
        stubService.ToThrow = new Exception("fake exception");

        MockEmailService mockEmail = new MockEmailService();

        LogAnalyzer2 log = new LogAnalyzer2();
        log.Service = stubService;
        log.Email = mockEmail;

        string tooShortFileName = "abc.ext";
        log.Analyze(tooShortFileName);

        Assert.AreEqual("a", mockEmail.To);                // MOCKING USED
        Assert.AreEqual("fake exception", mockEmail.Body); // MOCKING USED
        Assert.AreEqual("subject", mockEmail.Subject);
    }
}
Unit testing is an approach to testing where the unit (a class or a method) is under control.
A test double is not a primary object (in the OOP sense). It is an implementation created temporarily for testing or checking, or during development, in order to stand in for the dependencies of the tested unit (method, class, ...).
Test double types:
A fake object is a real, working implementation of an interface (protocol), or a subclass created via inheritance or some other approach, that is used as a dependency. It is usually created by the developer as the simplest way to substitute some dependency.
A stub object is a bare object (zeros, nils and methods without logic) with extra, predefined state (set by the developer) that determines the returned values. It is usually created by a framework.
class StubA: A {
    override func foo() -> String {
        return "My Stub"
    }
}
A mock object is very similar to a stub object, but its extra state is changed during program execution, so you can check whether something happened (a method was called, with which arguments, when, how often, ...).
class MockA: A {
    var isFooCalled = false
    override func foo() -> String {
        isFooCalled = true
        return "My Mock"
    }
}
A spy object is a real object with "partial mocking": you work with a non-double object except for the mocked behavior.
A dummy object is an object that is necessary to run a test, but none of its variables or methods are ever actually used.
stub vs mock
Martin Fowler said
There is a difference in that the stub uses state verification while the mock uses behavior verification.
[Mockito mock vs spy]
All of them are called test doubles and are used to stand in for the dependencies that your test case needs.
Stub:
It already has a predefined behavior that sets up your expectation;
for example, a stub returns only the success case of your API response.
A mock is a smarter stub. You verify that your test passes through it;
so you could make a mock that returns either the success or the failure case, depending on a condition that can be changed in your test case.
If you are familiar with Arrange-Act-Assert, then one way of explaining the difference between stub and mock that might be useful for you, is that stubs belong to the arrange section as they are for arranging input state, and mocks belong to the assert section as they are for asserting results against.
Dummies don't do anything. They are just for filling up parameter lists, so that you don't get undefined or null errors. They also exist to satisfy the type checker in statically typed languages, so that you can be allowed to compile and run.
Stubs, fakes and mocks have different meanings across different sources. I suggest you introduce your team's own internal terms and agree upon their meaning.
I think it is important to distinguish between two approaches:
- behaviour validation (implies behaviour substitution)
- end-state validation (implies behaviour emulation)
Consider email sending in case of an error. When doing behaviour validation, you check that the Send method of IEmailSender was executed once. And you need to emulate the return value of this method, returning the Id of the sent message. So you say: "I expect that Send will be called, and I will just return a dummy (or random) Id for any call". This is behaviour validation:
emailSender.Expect(es=>es.Send(anyThing)).Return((subject,body) => "dummyId")
When doing state validation you will need to create a TestEmailSender that implements IEmailSender, and implement its Send method by saving the input to some data structure that will be used for later state verification, such as a collection of sent emails called SentEmails. In your tests you then check that SentEmails contains the expected email. This is state validation:
Assert.AreEqual(1, emailSender.SentEmails.Count)
From my reading I understood that behaviour validation is usually what people call Mocks, and state validation is usually what people call Stubs or Fakes.
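A minimal Java sketch of such a state-validating test double (the IEmailSender interface shown here is an assumption based on the description above; the snippets in this answer look more like C#):

import java.util.ArrayList;
import java.util.List;

// hypothetical production interface, as described above
interface IEmailSender {
    String send(String subject, String body);
}

// hand-written test double that records what was sent, for state validation
class TestEmailSender implements IEmailSender {
    final List<String> sentEmails = new ArrayList<>();

    @Override
    public String send(String subject, String body) {
        sentEmails.add(subject + ": " + body);
        return "dummyId";
    }
}

// in the test, after exercising the code that should send the email:
//   assertEquals(1, emailSender.sentEmails.size())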
It's a matter of making the tests expressive. I set expectations on a Mock if I want the test to describe a relationship between two objects. I stub return values if I'm setting up a supporting object to get me to the interesting behaviour in the test.
Stubs and fakes are similar in that they can vary their response based on input parameters. The main difference between them is that a fake is closer to a real-world implementation than a stub; stubs contain basically hard-coded responses to an expected request. Let's see an example:
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MyUnitTest {

    @Test
    public void testToNumber() {
        StubDependency stubDependency = new StubDependency();
        int result = stubDependency.toNumber("one");
        assertEquals(1, result);
    }
}

public class StubDependency {

    public int toNumber(String param) {
        if ("one".equals(param)) {
            return 1;
        }
        if ("two".equals(param)) {
            return 2;
        }
        return 0;
    }
}
A mock is a step up from fakes and stubs. Mocks provide the same functionality as stubs but are more complex. They can have rules defined for them that dictate in what order methods on their API must be called. Most mocks can track how many times a method was called and can react based on that information. Mocks generally know the context of each call and can react differently in different situations. Because of this, mocks require some knowledge of the class they are mocking. A stub generally cannot track how many times a method was called or in what order a sequence of methods was called. A mock looks like this:
public class MockADependency {

    private int shouldCallTwiceCount;
    private boolean shouldCallAtEndCalled;
    private boolean shouldCallFirstCalled;

    public int stringToInteger(String s) {
        if ("abc".equals(s)) {
            return 1;
        }
        if ("xyz".equals(s)) {
            return 2;
        }
        return 0;
    }

    public void shouldCallFirst() {
        if ((shouldCallTwiceCount > 0) || shouldCallAtEndCalled) {
            throw new AssertionError("shouldCallFirst not the first method called");
        }
        shouldCallFirstCalled = true;
    }

    public int shouldCallTwice(String s) {
        if (!shouldCallFirstCalled) {
            throw new AssertionError("shouldCallTwice called before shouldCallFirst");
        }
        if (shouldCallAtEndCalled) {
            throw new AssertionError("shouldCallTwice called after shouldCallAtEnd");
        }
        if (shouldCallTwiceCount >= 2) {
            throw new AssertionError("shouldCallTwice called more than twice");
        }
        shouldCallTwiceCount++;
        return stringToInteger(s);
    }

    public void shouldCallAtEnd() {
        if (!shouldCallFirstCalled) {
            throw new AssertionError("shouldCallAtEnd called before shouldCallFirst");
        }
        if (shouldCallTwiceCount != 2) {
            throw new AssertionError("shouldCallTwice not called twice");
        }
        shouldCallAtEndCalled = true;
    }
}
According to the book "Unit Testing Principles, Practices, and Patterns by Vladimir Khorikov" :
Mocks help to emulate and examine outcoming interactions. These interactions are calls the SUT makes to its dependencies to change their state. In other words, they help to examine the interaction (behaviour) between the SUT and its dependencies. Mocks can be:
Spy: created manually
Mock: created using a framework
Stubs help to emulate incoming interactions. These interactions are calls the SUT makes to its dependencies to get input data. In other words, they help to test the data passed to the SUT. Stubs come in three types:
Fake: usually implemented to replace a dependency that doesn't yet exist.
Dummy: a hard-coded value.
Stub: a fully fledged dependency that you configure to return different values for different scenarios.
In the xUnit Test Patterns book by Gerard Meszaros there is a nice table that gives good insight into the differences.
I tend to use just 2 terms - Fake and Mock.
Mock only when using a mocking framework like Moq, for example, because it doesn't seem right to refer to something as a Fake when it's being created with new Mock<ISomething>(). While you can technically use a mocking framework to create Stubs or Fakes, it just seems kind of dumb to call it that in this situation: it has to be a Mock.
Fake for everything else. If a Fake can be summarised as an implementation with reduced capabilities, then I think a Stub could also be a Fake (and if not, who cares; everyone knows what I mean, and not once has anyone ever said "I think you'll find that's a Stub").