Fake objects vs Mock objects - unit-testing

In TDD there are two concepts: fake objects and mock objects. These are used when the class you want to test interacts with other classes, objects, or databases...
My question is: what is the difference between the two, and when should I use each one?
Edit:
I found this answer:
What's the difference between faking, mocking, and stubbing?
But I'm still confused about the difference between the two:
both of them provide an implementation of the components, with a light implementation in the case of a fake. But what is meant by a "light implementation" or "shortcut" in the case of a fake?
And how does the way a mock object works differ from how the real object works?

A fake implementation of a DataSet, for instance, would simply return a static set of data. A mock would be pretty much a full implementation, able to generate various sets of data depending on the input. If you were mocking away your data layer, you would be able to execute your command objects against the mock, and it would be robust enough to return data for a "valid" statement or throw exceptions for an invalid one. All without actually connecting to a database or file.
As for the differences between mock and real, usually when mocking a class away, you would create a factory object that, by default, returns the real object, but when you write your tests, you can tell it to return a specific mock class. The mock class either implements the same interfaces as the real object, or you use a wrapper class that mimics the underlying real class, but allows for dependency injection for the critical parts to generate data without making the external calls.
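To make the distinction concrete, here is a rough sketch (NUnit and Moq assumed; IDataStore and FakeDataStore are illustrative names, not part of the question). The fake is a tiny working implementation with a hard-coded "shortcut" result, while the mock is configured per test and can also verify how it was used:

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

// Hypothetical data-access interface standing in for the "data layer" above.
public interface IDataStore
{
    IList<string> GetNames();
}

// Fake: a light, working implementation that takes a shortcut and returns canned data.
public class FakeDataStore : IDataStore
{
    public IList<string> GetNames() => new List<string> { "Alice", "Bob" };
}

[TestFixture]
public class FakeVersusMockExample
{
    [Test]
    public void Fake_SuppliesStaticData_Mock_VerifiesInteraction()
    {
        // The fake just supplies data; there is nothing to configure or verify.
        IDataStore fake = new FakeDataStore();
        Assert.AreEqual(2, fake.GetNames().Count);

        // The mock is set up per test and can check how it was called.
        var mock = new Mock<IDataStore>();
        mock.Setup(m => m.GetNames()).Returns(new List<string> { "Alice" });
        Assert.AreEqual(1, mock.Object.GetNames().Count);
        mock.Verify(m => m.GetNames(), Times.Once());
    }
}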

Related

Mocking a file write process in googlemock

I am just starting with mocking using googlemock for a C++ project. In my case, my class to be tested observes a file that is written to, and whenever a minimal amount of new data has been written, it starts doing some work.
What I need is a mock class for the process writing to the file. As far as I understand things, I need to completely implement this "writing to file" functionality in the form of (for googlemock) a virtual class from which a mock class is derived? The mock wrapper is then used for testing and evaluation purposes, right?
Thanks for help!
Mocks, in google mock terms, are objects used to validate that your code under test performs certain operations on them.
What you describe is not a mock, but a utility class that triggers the operations of your code under test.
What does your class do when it detects that the file it observes is written to? If, for instance, it performs a call to another object, then you could use a mock object to check that it gets called with the right parameters, e.g. the new bulk of data written to the file.
I am assuming that an object of your "observer" class is notified that a minimal amount of data has been written by an object of the "writer" class. In that case, you need to define an abstract class that represents an interface for your "writer" class, and have your real "writer" class inherit from it and override its virtual functions. Also, have your mock "writer" class inherit from this interface and create mock implementations using MOCK_METHODn.
Then, have your "observer" class receive notifications from the "writer" object through a pointer to the abstract class. This way, you can use dependency injection to switch implementations during the testing phase: create a mock "writer" object and pass its address to the "observer" object (instead of the address of a real "writer" object), and set up test cases using EXPECT_CALL on the mock object.
This is the best advice I can give since you did not provide us with a detailed description of your classes.
EDIT:
Concerning the implementation of your real "writer" class: you do not have to create it immediately; you can use the mock class for now to test the behavior of the "observer" class and leave the implementation for later. You will, of course, have to implement it eventually since it has to be used in production code.

How to verify that a method in the class under test is called

I have a repository with a method called ConvertToBusinessEntity which converts the data from the data source to a business object. This method is supposed to be used by other methods in the repository namely the Get, GetAll, etc.
This method is unit tested. I check that the data is being retrieved correctly from the data source and that the values are being put in the correct properties of the entity.
But now I need to create and test the GetEntity method, which is supposed to call ConvertToBusinessEntity. The logic behind ConvertToBusinessEntity is already tested; I just need to know how to verify that a method in the class being tested is called (not a dependency).
Does anyone know how to do this or any alternative method?
I thought of an alternative method but I am not sure if it's the best. I could extend the class under test and have a counter increasing each time the method is called.
Another one would be to mock the class itself.
What do you think? Any other suggestions?
Having ConvertToBusinessEntity in the repository is not a very good idea. The responsibility of a repository is working with the data store (CRUD). Mapping between data types is the responsibility of some mapper class. Otherwise your repository has too many responsibilities. Take a look at what you are trying to test:
I check that the data is being retrieved correctly from the data source and that the values are being put in the correct properties of the entity
You see the "and"? Your test can fail for two completely different reasons, and you would also have to change the repository for two completely different reasons. The best approach here is persisting business entities directly. Modern ORMs allow doing that without polluting the business entity with attributes or forcing it to inherit from some data-access-specific class.
If you really want to have data mapping logic in the repository, then make it private (actually only the repository should require conversion of a business entity to some data-access object) and don't worry about how this logic is implemented; let it be part of the internal class implementation. You should care only about the repository being able to accept or return filled business entities - that is the responsibility of the repository. It doesn't matter how the mapping is implemented inside it. You should test what the repository does, not how. So just check that the expected business objects are returned by the repository.
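As a rough illustration of testing what the repository returns rather than how it maps, a test might look like this (NUnit and Moq assumed; IDataSource, PersonRecord, Person and PersonRepository are made-up names standing in for your own types):

using Moq;
using NUnit.Framework;

[TestFixture]
public class PersonRepositoryTests
{
    [Test]
    public void GetEntity_ReturnsFilledBusinessEntity()
    {
        // Stub the data source; the test never asks whether ConvertToBusinessEntity ran.
        var dataSource = new Mock<IDataSource>();
        dataSource.Setup(ds => ds.GetRecord(42))
                  .Returns(new PersonRecord { Id = 42, Name = "Ada" });

        var repository = new PersonRepository(dataSource.Object);

        Person person = repository.GetEntity(42);

        // Only the observable outcome matters: a correctly filled business entity.
        Assert.AreEqual(42, person.Id);
        Assert.AreEqual("Ada", person.Name);
    }
}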
I just need to know how to verify that a method in the class being tested is called (not a dependency).
But do you really need to do that? If your GetEntity method operates correctly, do you really care how it operates? Do you really care if it performs its function by delegating to ConvertToBusiness, or by some other means?
I recommend instead that you
Think of each method as having a specification.
That specification describes the outputs it must produce and the publicly visible manipulations it must make. It does not describe how the method performs its function; that is an implementation detail that could change.
Your unit tests should check only that your methods conform to their specification.
You might nevertheless use your knowledge about the implementation to choose good test cases.
But, you might declare, if I do that I am not unit testing my method code; my test of GetEntity depends on both the GetEntity method and the ConvertToBusinessEntity method: two units, so an integration test rather than a unit test. But do you mock the methods of the runtime environment? Of course not. The line between unit and integration testing is not so clear.
More philosophically, in many cases you cannot create good mock objects. The reason is that, for most methods, the manner in which an object delegates to associated objects is undefined. Whether it delegates, and how, is left by the specification as an implementation detail. The only requirement is that, on delegating, the method satisfies the preconditions of its delegate. In such a situation, only a fully functional (non-mock) delegate will do. If the real object checks its preconditions, failure to satisfy a precondition on delegating will cause a test failure, and debugging that test failure will be easy.

When to use strict mocks?

I am trying to come up with a scenario in which one should use strict mocks, and I can't think of any.
When do you use strict mocks and why?
Normal (or loose) mocks are used when you want to verify that an expected method has been called with the proper parameters.
Strict mocks are used to verify that only the expected methods have been called and no other. Think of them as a kind of negative test.
In most cases, having strict mocks makes your unit tests very fragile. Tests start failing even if you make a small internal implementation change.
But let me give you an example where they may be useful - testing a requirement such as:
"A Get on a cache should not hit the database if it already contains data".
There are ways to achieve this with loose mocks, but instead, it is very convenient to simply set up a strict Mock<Database> with zero expected function calls. Any call to this database will then throw an exception and fail the test.
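With Moq, for example, that requirement could be sketched roughly like this (Cache and IDatabase are assumed names for the class under test and its data store):

using Moq;
using NUnit.Framework;

[TestFixture]
public class CacheTests
{
    [Test]
    public void Get_WhenKeyIsAlreadyCached_DoesNotTouchTheDatabase()
    {
        // Strict mock with zero setups: any call on it throws and fails the test.
        var database = new Mock<IDatabase>(MockBehavior.Strict);

        var cache = new Cache(database.Object);
        cache.Put("answer", 42);

        Assert.AreEqual(42, cache.Get("answer"));
    }
}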
Another scenario where you would want to use strict mocks is in an Adapter or Wrapper design pattern. In this pattern, you are not executing much business logic. The major part of testing these classes is whether the underlying functions have been called with the correct parameters (and no other). Strict mocks work fairly well in this case.
I have a simple convention:
Use strict mocks when the system under test (SUT) is delegating the call to the underlying mocked layer without really modifying or applying any business logic to the arguments passed to itself.
Use loose mocks when the SUT applies business logic to the arguments passed to itself and passes on some derived/modified values to the mocked layer.
For example, let's say we have a data access provider, StudentDAL, with two methods. Its interface looks something like this:
Student GetStudentById(int id);
IList<Student> GetStudents(int ageFilter, int classId);
The implementation that consumes this DAL looks like this:
public Student FindStudent(int id)
{
    // StudentDAL dependency injected
    // Use a strict mock to test this
    return StudentDAL.GetStudentById(id);
}

public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
    // StudentDAL dependency injected
    // The age filter is derived from the request and then passed to the underlying layer;
    // use a loose mock and the Verify API of Moq to make sure the filter is passed on correctly.
    int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;
    return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
}
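A rough sketch of the corresponding tests (NUnit and Moq assumed; IStudentDAL and StudentService are assumed names for the injected DAL interface and the class hosting the two methods above):

using System;
using Moq;
using NUnit.Framework;

[TestFixture]
public class StudentServiceTests
{
    [Test]
    public void FindStudent_DelegatesToTheDal_StrictMock()
    {
        var dal = new Mock<IStudentDAL>(MockBehavior.Strict);
        dal.Setup(d => d.GetStudentById(7)).Returns(new Student());

        var service = new StudentService(dal.Object);
        service.FindStudent(7);

        dal.VerifyAll(); // any call other than GetStudentById(7) would have thrown
    }

    [Test]
    public void GetStudentsForClass_PassesTheDerivedAgeFilter_LooseMock()
    {
        var dal = new Mock<IStudentDAL>(); // loose: only the verified call matters
        var service = new StudentService(dal.Object);
        var request = new StudentListRequest
        {
            DateOfBirthFilter = new DateTime(2010, 1, 1),
            ClassId = 3
        };

        service.GetStudentsForClass(request);

        int expectedAgeFilter = DateTime.Now.Year - 2010;
        dal.Verify(d => d.GetStudents(expectedAgeFilter, 3), Times.Once());
    }
}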

unit testing data storage

Suppose I have an interface with the methods 'storeData(key, data)' and 'getData(key)'. How should I test a concrete implementation? Should I check whether the data was correctly stored in the storage medium (e.g. an SQL database), or should I just check whether it gives the correct data back via getData?
If I look up the data in the database, it feels like I'm also testing the internals of the method, but only checking whether it gives the same data back feels incomplete.
You seem to be caught up in the hype of unit testing; what you will be doing is actually an integration test. Setting and getting back the same value for the same key is a unit test you'd do with a mock implementation of the storage engine. Actually testing the real storage, say your database (as you should), is no longer a unit test, but it is a fundamental part of testing - it sounds like integration testing to me. Don't use unit testing as your hammer; choose the right tools for the right job, and divide your testing into more layers.
What you want to do in a unit test is make sure that the method does the job it is supposed to do. If the method uses dependencies to accomplish its work, you would mock those dependencies out and make sure that your method calls the methods on the objects it depends on with the appropriate arguments. This way you test your code in isolation.
One of the benefits to this is that it will drive the design of your code in a better direction. In order to use mocking, for example, you naturally gravitate towards more decoupled code using dependency injection. This gives you the ability to easily substitute your mock objects for the actual objects that your class depends on. You also end up implementing interfaces, which are more naturally mocked. Both of these things are good design patterns and will improve your code.
To test your particular example, for instance, you might have your class depend on a factory to create connections to the database and a builder to construct parameterized SQL commands that are executed via the connection. You'd pass mocked versions of these objects to your class and ensure that it invokes the correct methods to set up the connection and the command, builds the right command, executes it, and tears the connection down. Or perhaps you inject an already open connection and simply build the command and invoke it. The point is that your class is built against an interface or set of interfaces, and you use mocking to supply objects that implement those interfaces, record invocations, and supply correct return values for the methods you expect to use from the interface(s).
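For illustration only, a mock-based test along those lines might look roughly like this (NUnit and Moq assumed; IStorageConnection, IConnectionFactory and SqlStorage are invented names for the abstractions described above):

using Moq;
using NUnit.Framework;

// Hypothetical abstractions for the connection and its factory.
public interface IStorageConnection
{
    void Execute(string commandText, string key, string data);
}

public interface IConnectionFactory
{
    IStorageConnection Open();
}

[TestFixture]
public class SqlStorageTests
{
    [Test]
    public void StoreData_ExecutesAnInsertThroughTheInjectedConnection()
    {
        var connection = new Mock<IStorageConnection>();
        var factory = new Mock<IConnectionFactory>();
        factory.Setup(f => f.Open()).Returns(connection.Object);

        // SqlStorage is the assumed implementation of the storeData/getData interface.
        var storage = new SqlStorage(factory.Object);
        storage.StoreData("somekey", "somedata");

        // The unit test checks the interaction; it never touches a real database.
        connection.Verify(c => c.Execute(It.Is<string>(sql => sql.Contains("INSERT")),
                                         "somekey", "somedata"), Times.Once());
    }
}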
In cases like this I will usually create SetUp and TearDown methods that fire before/after my unit tests. These methods will set up any test data I need in the db and delete any test data when I'm done. Pseudo code example:
Const KEY1 = "somekey"
Const VALUE1= "somevalue"
Const KEY2 = "somekey2"
Const VALUE2= "somevalue2"
Sub SetUpUnitTests()
{
Insert Into SQLTable(KEY1,VALUE1)
}
//this test is not dependent on the storeData method
Sub GetDataTest()
{
Assert.IsEqual(getData(KEY1),VALUE1)
}
//this test is not dependent on getData Method
Sub SetDataTest()
{
storeData(KEY2,VALUE2)
Assert.IsNotNull(Direct Call to SQL [Select data from table where key=KEY2])
}
Sub TearDownUnitTests()
{
Delete From table Where key in (KEY1, KEY2)
}
Testing both in concert is a common technique (at least, in my experience), and I wouldn't shy away from it. I've used this same pattern for serializing/deserializing and parsing and printing.
If you don't want to hit the database, you could use a database mock. Some people have the same reservations as you about mocks - a mock-based test is partly implementation specific. As in all things, it's a trade-off: consider the benefits of mocking (faster, not db dependent) versus its downsides (won't detect actual db problems).
I think it depends on what happens to the data later - if you're only ever going to access the data using storeData and getData, why not test the methods in concert? I suppose there's a chance that a bug will arise and it'll be slightly harder to figure out whether it's in storeData or getData, but I'd consider that an acceptable risk if it:
* makes your test easier to implement, and
* conceals the internals, as you say.
If the data will be read from, or inserted into, the database using some other mechanism, then I'd check the database using SQL as you suggest.
#brendan makes a good point, though - whichever method you decide on, you'll be inserting data in the database. It's a good idea to clear out the data before and after the tests to ensure that you can achieve consistent results.

What is an ObjectMother?

What is an ObjectMother and what are common usage scenarios for this pattern?
ObjectMother starts with the factory pattern, by delivering prefabricated test-ready objects via a simple method call. It moves beyond the realm of the factory by
facilitating the customization of created objects,
providing methods to update the objects during the tests, and
if necessary, deleting the object from the database at the completion of the test.
Some reasons to use ObjectMother:
* Reduce code duplication in tests, increasing test maintainability
* Make test objects super-easily accessible, encouraging developers to write more tests.
* Every test runs with fresh data.
* Tests always clean up after themselves.
(http://c2.com/cgi/wiki?ObjectMother)
See "Test Data Builders: an alternative to the Object Mother pattern" for an argument of why to use a Test Data Builder instead of an Object Mother. It explains what both are.
As stated elsewhere, ObjectMother is a Factory for generating Objects typically (exclusively?) for use in Unit Tests.
Where they are of great use is for generating complex objects where the data is of no particular significance to the test.
Where you might otherwise have created an essentially empty instance such as
Order rubishOrder = new Order("NoPropertiesSet");
_orderProcessor.Process(rubishOrder);
you would use a sensible one from the ObjectMother
Order motherOrder = ObjectMother.SimpleOrder();
_orderProcessor.Process(motherOrder);
This tends to help with situations where the class being tested starts to rely on a sensible object being passed in.
For instance, if you added some OrderNumber validation to the Order class above, you would simply need to set the OrderNumber in ObjectMother.SimpleOrder() for all the existing tests to pass, leaving you to concentrate on writing the validation tests.
If you had just instantiated the object in the test you would need to add it to every test (it is shocking how often I have seen people do this).
Of course, this could just be extracted out to a method, but putting it in a separate class allows it to be shared between multiple test classes.
Another recommendation is to use good, descriptive names for your methods to promote reuse. It is all too easy to end up with one object per test, which is definitely to be avoided. It is better to generate objects that represent general rather than specific attributes and then customize them for your test: for instance, ObjectMother.WealthyCustomer() rather than ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorsche() and ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorscheAndAFerrari().
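For illustration, a minimal ObjectMother along those lines might look like this (Order, Customer and their properties are assumptions based on the snippets above, not a prescribed design):

public static class ObjectMother
{
    public static Order SimpleOrder()
    {
        // Sensible, valid defaults; individual tests override only what they care about.
        var order = new Order("SimpleOrder");
        order.OrderNumber = "ORD-0001";
        return order;
    }

    public static Customer WealthyCustomer()
    {
        var customer = new Customer();
        customer.Name = "Jane Doe";
        customer.SharesValue = 1000000m;
        return customer;
    }
}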