I have created the following four tests in a test class that tests the findCompany() method of a CompanyService.
@Test
public void findCompany_CompanyIdIsZero() {
    exception.expect(IllegalArgumentException.class);
    companyService.findCompany(0);
}

@Test
public void findCompany_CompanyIdIsNegative() {
    exception.expect(IllegalArgumentException.class);
    companyService.findCompany(-100);
}

@Test
public void findCompany_CompanyIdDoesntExistInDatabase() {
    Company storedCompany = companyService.findCompany(100000);
    assertNull(storedCompany);
}

@Test
public void findCompany_CompanyIdExistsInDatabase() {
    Company company = new Company("FAL", "Falahaar");
    companyService.addCompany(company);
    Company storedCompany1 = companyService.findCompany(company.getId());
    assertNotNull(storedCompany1);
}
My understanding is that the first three of these are unit tests. They test the behavior of the findCompany() method, checking how it responds to different inputs.
The fourth test, though placed in the same class, actually seems to be an integration test to me. It requires a Company to be added to the database first, so that it can be found later on. This introduces external dependencies - addCompany() and the database.
Am I right? If so, how should I unit test finding an existing object? Just mock the service to "find" one? I think that defeats the intent of the test.
I appreciate any guidance here.
I look at it this way: the "unit" you are testing here is the CompanyService. In this sense all of your tests look like unit tests to me. Underneath your service, though, there may be another service (you mention a database) that these tests are also exercising. This starts to blur the line with integration testing a bit, but you have to ask yourself whether it matters. You could stub out any such underlying service, and you may want to if:
The underlying service is slow to set up or use, making your unit tests too slow.
You want to be sure the behaviour of this test is unaffected by the underlying service - i.e. this test should only fail if there is a bug in CompanyService.
In my experience, provided the underlying service is fast enough I don't worry too much about my unit test relying on it. I don't mind a bit of integration leaking into my unit tests, as it has benefits (more integration coverage) and rarely causes a problem. If it does cause problems you can always come back to it and add stubbing to improve the isolation.
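As a rough sketch of what that stubbing might look like - note that the CompanyDao interface, its findById() method and the CompanyService(CompanyDao) constructor are assumptions made for this illustration, not something taken from your code:

// Hypothetical data-access interface sitting underneath CompanyService.
public interface CompanyDao {
    Company findById(long id);
}

@Test
public void findCompany_CompanyIdExists_withStubbedDao() {
    final Company company = new Company("FAL", "Falahaar");

    // Throwaway stub: pretends the company exists for any positive id.
    CompanyDao stubDao = new CompanyDao() {
        @Override
        public Company findById(long id) {
            return id > 0 ? company : null;
        }
    };

    // Assumes CompanyService lets you inject its data access.
    CompanyService service = new CompanyService(stubDao);

    assertNotNull(service.findCompany(42));
}

With this in place the test can only fail because of CompanyService itself, never because of the database.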
Tests 1-4 could each be written as unit tests (mocked or not mocked) or as integration tests. It depends on what you want to test.
Why use mocking? As Jason Sankey said: to test only the service tier, not the underlying tier.
Why use mocking? Your business logic can take many different forms, so you may write several tests for one service method, e.g. create person (no address - exception, no bank account - exception, required not-null attributes missing - exception).
Can you imagine each test hitting the database just to exercise every possible exception state (no address, no bank account, etc.)? It is far too much work to fill the database for every exception state. Why not use mocked objects that act as 'crippled' objects which do not contain the expected values? Each test constructs its own 'crippled' mock object.
Mocking the various states means your tests stay as simple as possible, because each test method tests exactly one state. Such tests are clear and easy to understand and maintain. That is one of the goals I aim for when writing a test.
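For illustration, one such single-state test might look like this with Mockito - PersonService, PersonDao and Person are made-up names here; the point is only that each test sets up one 'crippled' input and expects one exception:

import static org.mockito.Mockito.mock;

import org.junit.Test;

public class PersonServiceTest {

    // One test per invalid state; no database is involved at any point.
    @Test(expected = IllegalArgumentException.class)
    public void createPerson_withoutAddress_throwsException() {
        PersonDao dao = mock(PersonDao.class);             // stands in for the persistence tier
        PersonService service = new PersonService(dao);

        Person personWithoutAddress = new Person("John");  // deliberately missing the address

        service.createPerson(personWithoutAddress);        // should fail validation before the DAO is ever touched
    }
}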
Related
I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling to use it in a real-world project. My main issue is that I don't know where to start - what should the first test be?
Suppose I have to write a client library that calls an external system's methods (e.g. notifications).
I want this client to work as follows
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I make up some rough set of classes for this library?
Should I start by testing NotificationClient as below?
public void testClientSendInvalidEventCommand() {
    NotificationClient client = new NotificationClient(...);
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, with none of the baby steps that TDD prescribes. I can mock out something in the client, but then I have to know upfront what needs to be mocked, so some upfront design has to be made.
Maybe I should start from the bottom, test the message-formatting component first, and then use it in the proper client test?
Which way is the right one to go?
Should we always start from the top (and if so, how do we deal with the huge first step that requires)?
Can we start with any class that realizes a tiny part of the desired feature (such as the Formatter in this example)?
If I knew where to aim my tests, it would be a lot easier to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
    NotificationClient client = new NotificationClient("abcd1234");
    assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first - not until I've written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string - but that's part of the TDD cycle.
public class NotificationClient {
    public NotificationClient(String clientId) {
    }

    public String clientId() {
        return "";
    }
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public String clientId() {
    return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
    NotificationClient client = new NotificationClient("bcde2345");
    assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
    private String _clientId;

    public NotificationClient(String clientId) {
        _clientId = clientId;
    }

    public String clientId() {
        return _clientId;
    }
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
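For illustration, that next test might look something like the following (Event.WRONG_EVENT, Response and codeValue() come from your own example; whether 1223 is really the right code is for your spec to say), and as before the first passing implementation can simply hard-code the response:

public void testNotifyOnWrongEventReturnsErrorCode() {
    NotificationClient client = new NotificationClient("abcd1234");
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}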
Does that help any?
Don't confuse acceptance tests, which hook into each end of your application and form an executable specification, with unit tests.
If you are doing 'pure' TDD you write an acceptance test which drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are you will delegate the implementation to multiple classes you can unit test separately.
How complicated things are allowed to get before you have to split them up in order to test and understand them properly is why it is called Test-Driven Design.
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea what the high level external interface to the system you are developing should be so let's start there. Write a test for how you think users of your notification client should interact with it and let it fail. This test is the basis for your acceptance or integration tests and they are going to continue failing until the features they describe are finished. That's ok.
Now step down one level. What are the steps which need to occur to provide that high level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered which might cause you to change the notification center interface you have started to define? Keep drilling down depth-first defining behavior with failing tests until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test and so on. You'll eventually have completed a depth-first construction of a tree of tests and should have a well tested feature whose design was driven by your tests.
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find things make sense and need to be refactored, so you will need to modify your tests as you go. That's the way it's supposed to work...
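For instance, if you decide while writing that first test that the client should delegate message preparation to a formatter, the test might end up looking like this - MessageFormatter is a hypothetical collaborator, and the two-argument constructor is an assumption made purely for the sketch (Mockito used for the mock):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class NotificationClientTest {

    @Test
    public void clientRejectsInvalidEventWithoutTouchingTheFormatter() {
        // MessageFormatter is invented here; discovering it is part of the upfront design the test forces.
        MessageFormatter formatter = mock(MessageFormatter.class);
        NotificationClient client = new NotificationClient("abcd1234", formatter);

        Response code = client.notifyOnEvent(Event.WRONG_EVENT);

        assertEquals(1223, code.codeValue());
        // An invalid event should never reach the formatting/translation layer.
        verifyZeroInteractions(formatter);
    }
}

The mock you introduce here then becomes the specification for the next class you test-drive (the real formatter).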
I've got a bunch of methods in my application service layer that are doing things like this:
public void Execute(PlaceOrderOnHoldCommand command)
{
    var order = _repository.Load(command.OrderId);
    order.PlaceOnHold();
    _repository.Save(order);
}
And at present, I have a bunch of unit tests like this:
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    var repository = new Mock<IOrderRepository>();
    const int orderId = 1;
    var order = new Mock<IOrder>();
    repository.Setup(r => r.Load(orderId)).Returns(order.Object);

    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository.Object);

    service.Execute(command);

    repository.Verify(r => r.Load(It.Is<int>(x => x == orderId)), Times.Exactly(1));
}
[Test]
public void PlaceOrderOnHold_CallsPlaceOnHold()
{
/* blah blah */
}
[Test]
public void PlaceOrderOnHold_SavesOrderToRepository()
{
/* blah blah */
}
It seems to be debatable whether these unit tests add value that's worth the effort. I'm quite sure that the application service layer should be integration tested, though.
Should the application service layer be tested to this level of granularity, or are integration tests sufficient?
I'd write a unit test despite there also being an integration test. However, I'd likely make the test much simpler by eliminating the mocking framework, writing my own simple mock, and then combining all those tests to check that the order in the mock repository was on hold.
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    const int orderId = 1;
    var repository = new MyMockRepository();
    repository.save(new MyMockOrder(orderId));

    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository);

    service.Execute(command);

    Assert.IsTrue(repository.getOrder(orderId).isOnHold());
}
There's really no need to check to be sure that load and/or save is called. Instead I'd just make sure that the only way that MyMockRepository will return the updated order is if load and save are called.
This kind of simplification is one of the reasons that I usually don't use mocking frameworks. It seems to me that you have much better control over your tests, and a much easier time writing them, if you write your own mocks.
Exactly: it's debatable! It's really good that you are weighing the expense/effort of writing and maintaining your test against the value it will bring you - and that's exactly the consideration you should make for every test you write. Often I see tests written for the sake of testing and thereby only adding ballast to the code base.
As a guideline, I usually aim for a full integration test of every important successful scenario/use case. Other tests I'll write are for parts of the code that are likely to break with future changes, or have broken in the past. And that is definitely not all code. That's where your judgement and insight into the system and requirements come into play.
Assuming that you have an (integration) test for service.Execute(placeOrderOnHoldCommand), I'm not really sure if it adds value to test if the service loads an order from the repository exactly once. But it could be! For instance when your service previously had a nasty bug that would hit the repository ten times for a single order, causing performance issues (just making it up). In that case, I'd rename the test to PlaceOrderOnHold_LoadsOrderFromRepositoryExactlyOnce().
So for each and every test you have to decide for yourself ... hope that helps.
Notes:
The tests you show can be perfectly valid and look well written.
Your sequence of test methods seems to be inspired by the way the Execute(...) method is currently implemented. When you structure your tests this way, you may be tying yourself to a specific implementation. Tests like that can actually make the code harder to change - make sure you're only testing the important external behavior of your class.
I usually write a single integration test of the primary scenario. By primary scenario I mean the successful path through all the code being tested. Then I write unit tests for all the other scenarios, like checking all the cases in a switch, testing exceptions, and so forth.
I think it is important to have both. Yes, it is possible to test it all with integration tests only, but that makes your tests long-running and harder to debug. On average I think I have about 10 unit tests per integration test.
I don't bother testing one-liner methods unless something business-logic-like happens in that line.
Update: Just to make it clear, because I'm doing test-driven development I always write the unit tests first and typically write the integration test at the end.
Sorry for the long post...
While being introduced to a brownfield project, I'm having doubts about certain sets of unit tests and what to think of them. Say you had a repository class wrapping a stored procedure, and the developer guide book has a certain set of guidelines (rules) describing how this class should be constructed. The class could look like the following:
public class PersonRepository
{
    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (new SomeProfiler("someKey"))
        {
            var sp = Ioc.Resolve<IPersonStoredProcedure>();
            sp.addNameArguement(personName);
            sp.addCityArguement(cityName);
            return sp.invoke();
        }
    }
}
Now, I would of course write some integration tests, testing that the SP can be invoked and that the behavior is as expected. However, should I also write unit tests that assert that:
Constructor for SomeProfiler with the input parameter "someKey" is called
The Constructor of PersonStoredProcedure is called
The addNameArgument method on the stored procedure is called with parameter personName
The addCityArgument method on the stored procedure is called with parameter cityName
The invoke method is called on the stored procedure
If so, I would potentially be testing the whole structure of the method, not just its behavior. My initial thought is that this is overkill. However, with regard to the coding practices enforced by the team, these tests ensure a uniform and 'correct' structure and that the next layer is called correctly (from DAL to DB, BLL to DAL, etc.).
In my case these types of tests are performed for each layer of the application.
Follow-up question: the use of the SomeProfiler class smells a little like a convention to me. Instead of creating explicit tests for this, could one create convention-style tests by using static code analysis or unit tests + reflection?
Thanks in advance.
I think your initial thought was right - this is overkill. Although you can use reflection to make sure that the class has the methods you expect, I'm not sure you want to test it that way.
Perhaps instead of unit testing you should use a tool such as FxCop/StyleCop or NDepend to make sure all of the classes in a specific assembly/DLL have these properties.
Having said that, I'm a believer in "only code what you need": why test that a method exists? Either you use it somewhere in your code, in which case you can test that specific case, or you don't - and then it's irrelevant.
Unit tests should focus on behavior, not implementation. So writing a test to verify that certain arguments are set or passed in doesn't add much value to your testing strategy.
As the example provided appears to be communicating with your database, it can't truly be considered a "unit test" as it must communicate with physical dependencies that have additional setup and preconditions, such as availability of the environment, database schema, existing data, stored-procedures, etc. Any test you write is actually verifying these preconditions as well.
In its present condition, your best bet for these types of tests is to test the behavior provided by the class -- invoke a method on your repository and then validate that the results are what you expected. However, you'll suddenly realize that there's a hidden cost here -- the database maintains state between test runs, and you'll need additional setup or tear-down logic to ensure that the database is in a well-known state.
While I realize the intent of the question was about testing a "black box", it seems obvious that there's some hidden magic here in your API. My preference to solve the well-known state problem is to use an in-memory database that is scoped to the current test, which isolates me from environment considerations and enables me to parallelize my integration tests. I'd wager that under the current design, there is no "seam" to programmatically introduce a database configuration, so you're "hemmed in". In my experience, magic hurts.
However, a slight change to the existing design solves this problem and the "magic" goes away:
public class PersonRepository : IPersonRepository
{
    private ConnectionManager _mgr;

    public PersonRepository(ConnectionManager mgr)
    {
        _mgr = mgr;
    }

    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (var p = _mgr.CreateProfiler("somekey"))
        {
            var sp = new PersonStoredProcedure(p);
            sp.addArguement("name", personName);
            sp.addArguement("city", cityName);
            return sp.invoke();
        }
    }
}
I have this simple method which calls the TFS (Team Foundation Server) API to get a WorkItemCollection object. I then just convert it to an entity class and also add it to a cache. As you can see, this is very simple.
How should I unit test this method? The only important thing it does is call the TFS API. Is it worth testing such methods? If yes, then how should we test them?
One way I can think of is to mock the call to Query.QueryWorkItemStore(query), return an object of type "WorkItemCollection", and then check that this method converts the "WorkItemCollection" to a List and adds it to the cache.
Also, as I am using the dependency injection pattern here, I am injecting dependencies for:
Cache
Query
Should I only pass in dependencies of a mocked type (using Moq), or should I pass the proper class types?
public virtual List<Sprint> Sprint(string query)
{
    List<Sprint> sprints = Cache.Get<List<Sprint>>(query);
    if (sprints == null)
    {
        WorkItemCollection items = Query.QueryWorkItemStore(query);
        sprints = new List<Sprint>();
        foreach (WorkItem i in items)
        {
            Sprint sprint = new Sprint
            {
                ID = i.Id,
                IterationPath = i.IterationPath,
                AreaPath = i.AreaPath,
                Title = i.Title,
                State = i.State,
                Goal = i.Description,
            };
            sprints.Add(sprint);
        }
        Cache.Add(sprints, query, this.CacheExpiryInterval);
    }
    return sprints;
}
Should I only pass in dependencies of a mocked type (using Moq), or should I pass the proper class types?
In your unit tests, you should pass a mock. There are several reasons:
A mock is transparent: it allows you to check that the code under test did the right thing with the mock.
A mock gives you full control, allowing you to test scenarios that are difficult or impossible to create with the real server (e.g. throw IOException)
A mock is predictable. A real server is not - it may not even be available when you run your tests.
Things you do on a mock don't influence the outside world. You don't want to change data or crash the server by running your tests.
A test with mocks is faster. No connection to the server or real database queries have to be made.
That being said, automated integration tests which include a real server are also very useful. You just have to keep in mind that they will have lower code coverage, will be more fragile, and will be more expensive to create/run/maintain. Keep your unit tests and your integration tests separate.
Edit: some collaborator objects, like your Cache object, may also be very unit-test friendly. If they have the same advantages as a mock (as listed above), then you don't need to create a mock for them. For example, you typically don't need to mock a collection.
I am new to unit testing, and I continuously hear the words "mock objects" thrown around a lot. In layman's terms, can someone explain what mock objects are and what they are typically used for when writing unit tests?
Since you say you are new to unit testing and asked for mock objects in "layman's terms", I'll try a layman's example.
Unit Testing
Imagine unit testing for this system:
cook <- waiter <- customer
It's generally easy to envision testing a low-level component like the cook:
cook <- test driver
The test driver simply orders different dishes and verifies the cook returns the correct dish for each order.
It's harder to test a middle component, like the waiter, that utilizes the behavior of other components. A naive tester might test the waiter component the same way we tested the cook component:
cook <- waiter <- test driver
The test driver would order different dishes and ensure the waiter returns the correct dish. Unfortunately, that means that this test of the waiter component may be dependent on the correct behavior of the cook component. This dependency is even worse if the cook component has any test-unfriendly characteristics, like non-deterministic behavior (the menu includes chef's surprise as a dish), lots of dependencies (the cook won't cook without his entire staff), or lots of resources (some dishes require expensive ingredients or take an hour to cook).
Since this is a waiter test, ideally, we want to test just the waiter, not the cook. Specifically, we want to make sure the waiter conveys the customer's order to the cook correctly and delivers the cook's food to the customer correctly.
Unit testing means testing units independently, so a better approach would be to isolate the component under test (the waiter) using what Fowler calls test doubles (dummies, stubs, fakes, mocks).
    ------------------------
    |                      |
    v                      |
test cook <- waiter <- test driver
Here, the test cook is "in cahoots" with the test driver. Ideally, the system under test is designed so that the test cook can be easily substituted (injected) to work with the waiter without changing production code (e.g. without changing the waiter code).
Mock Objects
Now, the test cook (test double) could be implemented in different ways:
a fake cook - someone pretending to be a cook by using frozen dinners and a microwave,
a stub cook - a hot dog vendor that always gives you hot dogs no matter what you order, or
a mock cook - an undercover cop following a script pretending to be a cook in a sting operation.
See Fowler's article for more specifics about fakes vs stubs vs mocks vs dummies, but for now, let's focus on a mock cook.
    ------------------------
    |                      |
    v                      |
mock cook <- waiter <- test driver
A big part of unit testing the waiter component focuses on how the waiter interacts with the cook component. A mock-based approach focuses on fully specifying what the correct interaction is and detecting when it goes awry.
The mock object knows in advance what is supposed to happen during the test (e.g. which of its methods will be invoked, etc.) and the mock object knows how it is supposed to react (e.g. what return value to provide). The mock will indicate whether what really happens differs from what is supposed to happen. A custom mock object could be created from scratch for each test case to execute the expected behavior for that test case, but a mocking framework strives to allow such a behavior specification to be clearly and easily indicated directly in the test case.
The conversation surrounding a mock-based test might look like this:
test driver to mock cook: expect a hot dog order and give him this dummy hot dog in response
test driver (posing as customer) to waiter: I would like a hot dog please
waiter to mock cook: 1 hot dog please
mock cook to waiter: order up: 1 hot dog ready (gives dummy hot dog to waiter)
waiter to test driver: here is your hot dog (gives dummy hot dog to test driver)
test driver: TEST SUCCEEDED!
But since our waiter is new, this is what could happen:
test driver to mock cook: expect a hot dog order and give him this dummy hot dog in response
test driver (posing as customer) to waiter: I would like a hot dog please
waiter to mock cook: 1 hamburger please
mock cook stops the test: I was told to expect a hot dog order!
test driver notes the problem: TEST FAILED! - the waiter changed the order
or
test driver to mock cook: expect a hot dog order and give him this dummy hot dog in response
test driver (posing as customer) to waiter: I would like a hot dog please
waiter to mock cook: 1 hot dog please
mock cook to waiter: order up: 1 hot dog ready (gives dummy hot dog to waiter)
waiter to test driver: here is your french fries (gives french fries from some other order to test driver)
test driver notes the unexpected french fries: TEST FAILED! the waiter gave back wrong dish
It may be hard to clearly see the difference between mock objects and stubs without a contrasting stub-based example to go with this, but this answer is way too long already :-)
Also note that this is a pretty simplistic example and that mocking frameworks allow for some pretty sophisticated specifications of expected behavior from components to support comprehensive tests. There's plenty of material on mock objects and mocking frameworks for more information.
A Mock Object is an object that substitutes for a real object. In object-oriented programming, mock objects are simulated objects that mimic the behavior of real objects in controlled ways.
A computer programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts.
http://en.wikipedia.org/wiki/Mock_object
Mock objects allow you to set up test scenarios without bringing to bear large, unwieldy resources such as databases. Instead of calling a database for testing, you can simulate your database using a mock object in your unit tests. This frees you from the burden of having to set up and tear down a real database, just to test a single method in your class.
The word "Mock" is sometimes erroneously used interchangeably with "Stub." The differences between the two words are described here. Essentially, a mock is a stub object that also includes the expectations (i.e. "assertions") for proper behavior of the object/method under test.
For example:
class OrderInteractionTester...

    public void testOrderSendsMailIfUnfilled() {
        Order order = new Order(TALISKER, 51);
        Mock warehouse = mock(Warehouse.class);
        Mock mailer = mock(MailService.class);
        order.setMailer((MailService) mailer.proxy());

        mailer.expects(once()).method("send");
        warehouse.expects(once()).method("hasInventory")
            .withAnyArguments()
            .will(returnValue(false));

        order.fill((Warehouse) warehouse.proxy());
    }
}
Notice that the warehouse and mailer mock objects are programmed with the expected results.
Mock objects are simulated objects that mimic the behavior of the real ones. Typically you write a mock object if:
The real object is too complex to incorporate into a unit test (for example, network communication: you can have a mock object that simulates being the other peer)
The result of your object is non-deterministic
The real object is not yet available
A Mock object is one kind of Test Double. You are using mock objects to test and verify the protocol/interaction of the class under test with other classes.
Typically you will kind of 'program' or 'record' expectations: the method calls you expect your class to make on an underlying object.
Let's say, for example, we are testing a service method that updates a field in a Widget, and that in your architecture there is a WidgetDao which deals with the database. Talking to the database is slow, and setting it up and cleaning up afterwards is complicated, so we will mock out the WidgetDao.
Let's think about what the service must do: it should get a Widget from the database, do something with it, and save it again.
So in pseudo-language with a pseudo-mock library we would have something like:
Widget sampleWidget = new Widget();
WidgetDao mock = createMock(WidgetDao.class);
WidgetService svc = new WidgetService(mock);
// record expected calls on the dao
expect(mock.getById(id)).andReturn(sampleWidget);
expect(mock.save(sampleWidget));
// turn the dao in replay mode
replay(mock);
svc.updateWidgetPrice(id,newPrice);
verify(mock); // verify the expected calls were made
assertEquals(newPrice,sampleWidget.getPrice());
In this way we can easily test-drive the development of classes that depend on other classes.
I highly recommend the great article by Martin Fowler explaining what exactly mocks are and how they differ from stubs.
When unit testing some part of a computer program you ideally want to test just the behavior of that particular part.
For example, look at the pseudo-code below from an imaginary piece of a program that uses another program to print something:
If theUserIsFred then
    Call Printer(HelloFred)
Else
    Call Printer(YouAreNotFred)
End
If you were testing this, you would mainly want to test the part that looks at if the user is Fred or not. You don't really want to test the Printer part of things. That would be another test.
This is where Mock objects come in. They pretend to be other types of things. In this case you would use a Mock Printer so that it would act just like a real printer, but wouldn't do inconvenient things like printing.
There are several other types of pretend objects that you can use that aren't Mocks. The main thing that makes Mocks Mocks is that they can be configured with behaviors and with expectations.
Expectations allow your Mock to raise an error when it is used incorrectly. So in the above example, you would want to be sure that the Printer is called with HelloFred in the "user is Fred" test case. If that doesn't happen, your Mock can warn you.
Behavior in Mocks means that, for example, if your code did something like:
If Call Printer(HelloFred) Returned SaidHello Then
    Do Something
End
Now you want to test what your code does when the Printer is called and returns SaidHello, so you can set up the Mock to return SaidHello when it is called with HelloFred.
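In a concrete mocking framework the two ideas look something like this (a Java/Mockito sketch; the Printer and Greeter types and their methods are just stand-ins for the pseudo-code above, not a real API):

import static org.mockito.Mockito.*;

import org.junit.Test;

public class GreeterTest {

    @Test
    public void greetsFredViaThePrinter() {
        Printer printer = mock(Printer.class);

        // Behaviour: the mock returns SaidHello when asked to print HelloFred.
        when(printer.print("HelloFred")).thenReturn("SaidHello");

        new Greeter(printer).greetIfFred("Fred");   // the code under test

        // Expectation: the test fails unless print("HelloFred") was called exactly once.
        verify(printer, times(1)).print("HelloFred");
    }
}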
One good resource around this is Martin Fowler's post Mocks Aren't Stubs.
Mock and stub objects are a crucial part of unit testing. In fact they go a long way to make sure you are testing units, rather than groups of units.
In a nutshell, you use stubs to break the SUT's (System Under Test) dependency on other objects, and you use mocks to do that and also verify that the SUT called certain methods/properties on the dependency. This goes back to the fundamental principles of unit testing: tests should be easily readable, fast, and not require configuration, which using all the real classes might imply.
Generally, you can have more than one stub in your test, but you should only have one mock. This is because a mock's purpose is to verify behavior, and your test should only test one thing.
Simple scenario using C# and Moq:
public interface IInput {
    object Read();
}

public interface IOutput {
    void Write(object data);
}

class SUT {
    IInput input;
    IOutput output;

    public SUT(IInput input, IOutput output) {
        this.input = input;
        this.output = output;
    }

    public void ReadAndWrite() {
        var data = input.Read();
        output.Write(data);
    }
}

[TestMethod]
public void ReadAndWriteShouldWriteSameObjectAsRead() {
    //we want to verify that SUT writes to the output interface
    //input is a stub, since we don't record any expectations
    Mock<IInput> input = new Mock<IInput>();
    //output is a mock, because we want to verify some behavior on it.
    Mock<IOutput> output = new Mock<IOutput>();
    var data = new object();
    input.Setup(i => i.Read()).Returns(data);

    var sut = new SUT(input.Object, output.Object);
    sut.ReadAndWrite();

    //calling Verify on a mock object is what makes it a mock, with respect to the method being verified.
    output.Verify(o => o.Write(data));
}
In the above example I used Moq to demonstrate stubs and mocks. Moq uses the same class for both - Mock<T> - which makes it a bit confusing. Regardless, at runtime the test will fail if output.Write is not called with data as the parameter, whereas a failure to call input.Read() will not fail it.
As another answer suggested via a link to "Mocks Aren't Stubs", mocks are a form of "test double" to use in lieu of a real object. What makes them different from other forms of test doubles, such as stub objects, is that other test doubles offer state verification (and optionally simulation) whereas mocks offer behavior verification (and optionally simulation).
With a stub, you might call several methods on the stub in any order (or even repeatedly) and determine success if the stub has captured a value or state you intended. In contrast, a mock object expects very specific functions to be called, in a specific order, and even a specific number of times. The test with a mock object will be considered "failed" simply because the methods were invoked in a different sequence or count - even if the mock object had the correct state when the test concluded!
In this way, mock objects are often considered more tightly coupled to the SUT code than stub objects. That can be a good or bad thing, depending on what you are trying to verify.
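To make the contrast concrete, here is a small Java/Mockito sketch - the Order, OrderRepository and OrderService types are invented for the example. The first test only checks the resulting state; the second also pins down which calls were made, in what order, and how often:

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.*;

import org.junit.Test;
import org.mockito.InOrder;

public class OrderServiceTest {

    @Test
    public void stubStyle_verifiesStateOnly() {
        OrderRepository repository = mock(OrderRepository.class);
        Order order = new Order(1);
        when(repository.load(1)).thenReturn(order);   // used purely as a stub here

        new OrderService(repository).placeOnHold(1);

        // State verification: only the end result matters, not how it was reached.
        assertTrue(order.isOnHold());
    }

    @Test
    public void mockStyle_verifiesBehaviour() {
        OrderRepository repository = mock(OrderRepository.class);
        Order order = new Order(1);
        when(repository.load(1)).thenReturn(order);

        new OrderService(repository).placeOnHold(1);

        // Behaviour verification: this test fails if load/save happen in a
        // different order or a different number of times, even when the
        // order still ends up on hold.
        InOrder inOrder = inOrder(repository);
        inOrder.verify(repository, times(1)).load(1);
        inOrder.verify(repository, times(1)).save(order);
    }
}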
Part of the point of using mock objects is that they don't have to be really implemented according to spec. They can just give dummy responses. E.g. if you have to implement components A and B, and both "call" (interact with) each other, then you can't test A until B is implemented, and vice versa. In test-driven development, this is a problem. So you create mock ("dummy") objects for A and B, that are very simple, but they give some sort of response when they're interacted with. That way, you can implement and test A using a mock object for B.
For PHP and PHPUnit this is explained well in the PHPUnit documentation; see here:
PHPUnit documentation
In simple words, a mock object is just a dummy stand-in for your original object that returns a canned return value, and that return value can then be used in the test class.
It's one of the main principles of unit testing: yes, you're trying to test your single unit of code, and your test results should not depend on the behavior of other beans or objects. So you should mock them using Mock objects with some simplified corresponding responses.