since I cannot get my head around the unit tests.
I have a service:
public interface IMyService
{
    List<Person> GetRandomPeople();
}
And then in my MVC Project I have the implementation of this service
public class MyService : IMyService
{
    public List<Person> GetRandomPeople()
    {
        // ...the actual logic to get the people goes here. This is what I want to test.
    }
}
Then in my controller
public class HomeController : Controller
{
    private readonly IMyService _myService;

    public HomeController(IMyService myService)
    {
        _myService = myService;
    }
}
Then in the Action Method I will use
public ActionResult CreateRandom()
{
    List<Person> people = _myService.GetRandomPeople();
    return View(people);
}
Note: the Person repo is in my service; I just typed this out quickly, so I do have a repo.
My question: how would I test my service implementation in my test project? I am really stuck now, and I think this is going to be the "light goes on" moment in my TDD future.
Thanks in advance.
The point of injecting the service interface into the controller is to allow you to test the controller in isolation. To test that, you pass in a mock or fake IMyService.
Depending on how your service is implemented, you may need to test your service with integration tests. Those should be separate from your unit tests (as you don't want to run them continuously).
For example, let's say IMyService is implemented with Entity Framework. You need to actually run the LINQ to Entities against a database to test it. You could use a local database, you could have EF create and populate a database on the fly, etc. Those aren't unit tests (they use I/O), but they're still important.
Other persistence frameworks may permit you to test against in-memory data sets. That's fine; I'd still consider that an integration test (your code + the framework) and would separate it from your unit tests.
The trick is to keep the business logic out of the service implementation. Restrict that (as much as possible) to pure data-access code. You want to test your business logic with unit tests.
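As an illustration, a controller test with a hand-written fake might look something like this (a sketch only: it assumes NUnit, classic ASP.NET MVC, and that Person has a Name property):

using System.Collections.Generic;
using System.Web.Mvc;
using NUnit.Framework;

// A hand-written fake: implements IMyService and returns fixed data
// instead of running the real logic.
public class FakeMyService : IMyService
{
    public List<Person> GetRandomPeople()
    {
        return new List<Person>
        {
            new Person { Name = "Alice" }, // Name is an assumed property
            new Person { Name = "Bob" }
        };
    }
}

[TestFixture]
public class HomeControllerTests
{
    [Test]
    public void CreateRandom_PassesPeopleFromServiceToView()
    {
        var controller = new HomeController(new FakeMyService());

        var result = (ViewResult)controller.CreateRandom();
        var model = (List<Person>)result.ViewData.Model;

        Assert.AreEqual(2, model.Count);
    }
}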
EDIT:
To address the question in the comment ("when you need to create stubs"):
You create stubs, fakes, test doubles, or mocks (there's a lot of terminology) when you have a class that you want to test in isolation (the system under test, or SUT) and that class has injected dependencies.
Those dependencies are going to be abstract in some way - interfaces, abstract classes, classes with virtual methods, delegates, etc.
Your production code will pass concrete implementations in (classes that hit the database, for instance). Your test code will pass test implementations in.
You could pass a simple stub implementation (just write a dummy class in your test project that implements the interface and returns fixed values for its members), or you could have a fancier implementation that will detect what calls are made and what arguments are passed (a mock object). You can write all of this by hand. It gets tedious, so many testers use mock object frameworks (also known as isolation frameworks). There are many of these (often several for any given language); I tend to use either NSubstitute or Moq in .NET.
Learning to write tests takes time. It helped me a lot to read several books on the subject. Blog posts may not give enough background. There are some excellent suggestions in the (sadly closed) question here.
The short answer is that you create stubs or mocks when your tests require them.
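For example, inside a test, the same idea written with Moq rather than a hand-written fake might look like this (a sketch, assuming Moq's Setup/Returns/Verify API; NSubstitute has an equivalent):

// Configure a mock IMyService to return a canned list.
var service = new Mock<IMyService>();
service.Setup(s => s.GetRandomPeople())
       .Returns(new List<Person> { new Person() });

var controller = new HomeController(service.Object);
var result = (ViewResult)controller.CreateRandom();

// Unlike a plain stub, a mock can also verify the interaction.
service.Verify(s => s.GetRandomPeople(), Times.Once());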
Related
I'm curious as to what method people like to use for mocking and why. The two methods that I know of are hard-coded mock objects and a mocking framework. To demonstrate, I'll outline an example using C#.
Suppose we have an IEmployeeRepository interface with a method called GetEmployeeById.
public interface IEmployeeRepository
{
    Employee GetEmployeeById(long id);
}
We can easily create a mock of this:
public class MockEmployeeRepository : IEmployeeRepository
{
    public Employee GetEmployeeById(long id)
    {
        Employee employee = new Employee();
        employee.FirstName = "First";
        employee.LastName = "Last";
        // ...set the remaining properties...
        return employee;
    }
}
Then, in our tests we can explicitly tell our services to use the MockEmployeeRepository, either using a setter or dependency injection. I'm new to mocking frameworks, so I'm curious as to why we use them if we can just do the above.
That's not a Mock, it's a Stub. For stubbing, your example is perfectly acceptable.
From Martin Fowler:
Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
When you're mocking something, you usually call a "Verify" method.
Look at this for the difference between mocks and stubs:
http://martinfowler.com/articles/mocksArentStubs.html
I think the choice between writing dummy objects by hand or by using a framework depends a lot upon the types of components that you are testing.
If it is part of the contract for the component under test to communicate with its collaborators following a precise protocol, then instrumented dummy objects ("Mocks") are just the thing to use. It is frequently much easier to test such protocols using a mocking framework than by hand-coding. Consider a component that is required to open a repository, perform some reads and writes in a prescribed order, and then close the repository -- even in the face of an exception. A mocking framework would make it easier to set up all of the necessary tests. Applications related to telecommunications and process control (to pick a couple of random examples) are full of components that need to be tested in this fashion.
On the other hand, many components in general business applications have no particular constraints on how they communicate with their collaborators. Consider a component that performs some kind of analysis of, say, university course loads. The component needs to retrieve instructor, student and course information from a repository. But it does not matter what order it retrieves the data: instructor-student-course, student-course-instructor, all-at-once, or whatever. There is no need to test for and enforce a data access pattern. Indeed, it would likely be harmful to test that pattern as it would be demanding a particular implementation unnecessarily. In this context, simple uninstrumented dummy objects ("Stubs") are adequate and a mocking framework is probably overkill.
I should point out that even when stubbing, a framework can still make your life a lot easier. One doesn't always have the luxury of dictating the signatures of one's collaborators. Imagine unit-testing a component that is required to process data retrieved from a thick interface like IDataReader or ResultSet. Hand-stubbing such interfaces is unpleasant at best -- especially if the component under test only actually uses three of the umpteen methods in the interface.
For me, the projects that have required mocking frameworks were almost invariably of a systems-programming nature (e.g. database or web infrastructure projects, or low-level plumbing in a business application). For applications-programming projects, my experience has been that there were few mocks in sight.
Given that we always strive to hide the messy low-level infrastructure details as much as possible, it would seem that we should aim to have the simple stubs far outnumber the mocks.
Some distinguish between mocks and stubs. A mock object may verify that it has been interacted with in the expected way. A mocking framework can make it easy to generate mocks and stubs.
In your example, you've stubbed out a single method in an interface. Consider an interface with n methods, where n can change over time. A hand-stubbed implementation may require more code and more maintenance.
A mocked interface can have different outputs per test - One test you may have a method return null, another test has the method return an object, another test has the method throw an exception. This is all configured in the unit test, whereas your version would require several hand-written objects.
Pseudocode:
//Unit Test One
MockObject.Expect(m => m.GetData()).Return(null);
//Unit Test Two
MockObject.Expect(m => m.GetData()).Return(new MyClass());
//Unit Test Three
MockObject.Expect(m => m.GetData()).ThrowException();
I tend to write stubs and mocks by hand, first. Then if it can be easily expressed using a mock object framework, I rewrite it so that I have less code to maintain.
I have been writing them by hand. I was having trouble using Moq, but then I read TDD: Introduction to Moq, and I think I get what they say about classical vs. mockist approaches now. I'll be giving Moq another try this evening, and I think understanding the "mockist" approach will give me what I need to make Moq work better for me.
First of all, let me say I am working from legacy code, so some changes can be made but not drastic ones.
My problem is I have a "Vehicle" object. It is pretty simple, but it has no interfaces or anything on it. This project was created before TDD really started to become mainstream. Anyway, I need to add a new method to change the starting mileage of the vehicle. I thought this would be a good place to start trying TDD and mocking, as I am new to it all. My problem is that to create a vehicle I need to do some checks which involve going to the database. Sorry if my question is not 100% clear; that is why I am posting, as I am a bit confused about where Rhino Mocks fits in (and whether I need it!).
The problem is dependencies. Your Vehicle class depends on the database. Hopefully all the interactions with the database have been encapsulated in a nice class; we will get back to that in a second. When you fire off your unit test, you want to be able to test the Vehicle class without having to care about the database. For example, you want to check that your SpeedUp(int x) method really increases the total speed by x. The first thing this method does is ask the DB for its current speed. That means you have to have a DB to test! Damn, that doesn't sound like a very fast or repeatable test. It's also a lot of setup just to run a test.
Wouldn't it be great if we could have a pretend DB? That is where mocking comes in. We create a mock of the class that has all the DB interaction encapsulated. We then set up the mock to respond with a pre-programmed value. So, for example, when we ask the DB for the current speed, it returns 100.
So now when we test, the mock returns 100 and we can assert that SpeedUp(int x) takes 100 and adds x to it.
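As a sketch (every name here is assumed, since the real legacy code will differ), once the DB access is behind an interface the test could look like this; a framework such as Rhino Mocks can generate the fake from the interface instead of you writing it by hand:

// Assumed interface extracted from the legacy DB-access class.
public interface ISpeedStore
{
    int GetCurrentSpeed(int vehicleId);
}

// Hand-written fake that plays the part of the database.
public class FakeSpeedStore : ISpeedStore
{
    public int GetCurrentSpeed(int vehicleId) { return 100; }
}

[TestFixture]
public class VehicleTests
{
    [Test]
    public void SpeedUp_AddsXToCurrentSpeed()
    {
        // Assumes Vehicle takes the store as a constructor dependency.
        var vehicle = new Vehicle(new FakeSpeedStore());

        vehicle.SpeedUp(20);

        Assert.AreEqual(120, vehicle.TotalSpeed); // 100 from the fake + 20
    }
}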
Rhino Mocks can only create mocks from interfaces or abstract classes, which do not exist in your legacy code.
TypeMock can mock anything but isn't free.
You could use Microsoft Moles to mock these out.
You should however take into account that Moles should be your last-resort solution; it's better to refactor your code and make it testable by abstracting your data layer away from your business layer.
HTH
Is it easy to create an instance of the Vehicle type (an object) and then invoke your method for a test? If yes, then chances are you don't need a mock.
However if your Vehicle type has dependencies (like a Database access object) that it needs to perform the action that you want to test, then you would like to use a mock database access object that returns canned values for the test since you want your unit tests to run fast.
Vehicle [depends On] OwnerRepository [satisfied By] SQLOwnerRepository
So you introduce an interface (an OwnerRepository to get details of the owner, let's say) to separate the DB interaction (define a contract) between the two. Make your real dependency (here SQLOwnerRepository) implement this interface. Also design your code such that dependencies can be injected, e.g.
public Vehicle(OwnerRepository ownerRepository)
{
    _ownerRepository = ownerRepository; // cache in a member variable
}
Now in the test code,
Vehicle [depends On] OwnerRepository [satisfied By] MockOwnerRepository
You have frameworks that, given an interface, will create a mock implementation of it (see the Rhino Mocks / Moq frameworks). So you no longer need an actual DB connection to test your Vehicle class. Mocks are used to abstract away time-consuming or uncontrollable dependencies in order to keep your unit tests fast and predictable.
I'd recommend reading up on "Dependency Injection" to have a better understanding of when and why to use mocks.
Not a direct answer to your question; however, it is worth taking a look at the following.
Gabriel Schenker has posted about applying TDD in legacy systems. PTOM – Brownfield development – Making your dependencies explicit
This article explains making dependencies explicit and using Dependency Injection. It also tells about Poor Man's Dependency Injection, which is needed when there is only a default constructor.
Something like:
public OrderService() : this(
    new OrderRepository(),
    new EmailSender(ConfigurationManager.AppSettings["SMTPServer"]))
{
}
The article also deals with creating a wrapper for ConfigurationManager for making it testable.
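A common shape for such a wrapper is an interface plus a thin pass-through class (a sketch, not the article's exact code):

using System.Configuration;

public interface IConfigReader
{
    string GetAppSetting(string key);
}

// Thin pass-through used in production; tests substitute a stub instead.
public class ConfigReader : IConfigReader
{
    public string GetAppSetting(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}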
During a recent interview I was asked why one would want to create mock objects. My answer went something like, "Take a database--if you're writing test code, you may not want that test hooked up live to the production database where actual operations will be performed."
Judging by the response, my answer clearly was not what the interviewer was looking for. What's a better answer?
I'd summarize like this:
Isolation - You can test just one method, independently of what it calls. Your test becomes a real unit test (most important, IMHO)
Decreased test development time - it is usually faster to use a mock than to create a whole class just to help you test
It lets you test even when you haven't implemented all the dependencies - you don't even need to create, for instance, your repository class, and you'll still be able to test a class that uses that repository
Keeps you away from external resources - helps in the sense that you don't need to access databases, call web services, read files, send emails, charge a credit card, and so on...
In an interview, I'd recommend mentioning that mocking works even better when developers use dependency injection, since it allows you to have more control and build tests more easily.
When unit testing, each test is designed to test a single object. However most objects in a system will have other objects that they interact with. Mock Objects are dummy implementations of these other objects, used to isolate the object under test.
The benefit of this is that any unit tests that fail generally isolate the problem to the object under test. In some cases the problem will be with the mock object, but those problems should be simpler to identify and fix.
It might be an idea to write some simple unit tests for the mock objects as well.
They are commonly used to create a mock data access layer so that unit tests can be run in isolation from the data store.
Other uses might be to mock the user interface when testing the controller object in the MVC pattern. This allows better automated testing of UI components that can somewhat simulate user interaction.
An example:
public interface IPersonDAO
{
    Person FindById(int id);
    int Count();
}

public class MockPersonDAO : IPersonDAO
{
    // public so the set of people can be loaded by the unit test
    public Dictionary<int, Person> _DataStore;

    public MockPersonDAO()
    {
        _DataStore = new Dictionary<int, Person>();
    }

    public Person FindById(int id)
    {
        return _DataStore[id];
    }

    public int Count()
    {
        return _DataStore.Count;
    }
}
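A test can then seed the public dictionary directly, something like this (assuming a suitable Person constructor):

var dao = new MockPersonDAO();
dao._DataStore.Add(1, new Person()); // seed the fake data store

Assert.AreEqual(1, dao.Count());
Assert.IsNotNull(dao.FindById(1));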
Just to add on to the fine answers here, mock objects are used in top-down or bottom-up structural programming (OOP too). They are there to provide data to upper-level modules (GUI, logic processing) or to act as our mock output.
Consider the top-down approach: you develop a GUI first, but a GUI ought to have data. So you create a mock database which just returns a std::vector<> of data. You have defined the 'contract' of the relationship. Who cares what goes on inside the database object - as long as my GUI list gets a std::vector<>, I'm happy. This can extend to providing mock user login information - whatever you need to get the GUI working.
Consider the bottom-up approach. You wrote a parser which reads in delimited text files. How do you know if it is working? You write a mock 'data sink' for those objects and route the data there to verify that the data are read correctly. The module on the next level up may require two data sources, but you have only written one.
And while defining the mock objects, you have also defined the contract of the relationship. This is often used in test-driven development. You write the test cases, use the mock objects to get them working, and more often than not, the mock object's interface becomes the final interface (which is why at some point you may want to separate out the mock object's interface into a pure abstract class).
Hope this helps
Mock objects/functions can also be useful when working in a team. If you're working on a part of the code base that depends on a different part of the code base that someone else is responsible for - which is still being written or hasn't been written yet - a mock object/function is useful in giving you an expected output so that you can carry on with your work without being held up waiting for the other person to finish their part.
Here are a few situations where mocking is indispensable:
When you are testing GUI interaction
When you are testing a web app
When you are testing code that interacts with hardware
When you are testing legacy apps
I will go in a different direction here. Stubbing/faking does all of the things mentioned above, but perhaps the interviewers were thinking of mocks as fake objects that cause the test to pass or fail. I am basing this on the xUnit terminology. This could have led to some discussion about state testing versus behavior/interaction testing.
The answer they may have been looking for is: a mock object is different from a stub. A stub emulates a dependency for the method under test; a stub shouldn't cause a test to fail. A mock does this too, but also checks how and when it is called. Mocks cause a test to pass or fail based on the underlying behavior. This has the benefit of being less reliant on data during a test, but ties the test more closely to the implementation of the method.
Of course this is speculation; it is more likely they just wanted you to describe the benefits of stubbing and DI.
To take a slightly different approach (as I think mocks have been nicely covered above):
"Take a database--if you're writing test code, you may not want that test hooked up live to the production database where actual operations will be performed."
IMO that's a bad way of stating the example use. You would never "hook it up to the prod database" during testing, with or without mocks. Every developer should have a developer-local database to test against. Then you would move on to the test environment's database, then maybe UAT, and finally prod. You are not mocking to avoid using the live database; you are mocking so that classes that are not directly dependent on a database do not require you to set up a database.
Finally (and I accept I might get some comments on this), IMO the developer-local database is a valid thing to hit during unit tests. You should only be hitting it while testing code that directly interacts with the database, and using mocks when you are testing code that indirectly accesses the database.
I have written my unit tests, and where external resources are needed they are dealt with by using fakes.
All is good so far. Now I am faced with the other test phases, mainly integration, where I want to repeat the unit test methods against real external resources, e.g. the database.
So, what are the recommendations for structuring test projects for unit vs. integration testing? I understand some people prefer separate assemblies for unit and integration tests.
How would one share common test code between the two assemblies? Should I create a third assembly which contains all the abstract test classes and let the unit and integration tests inherit from them? I am looking for maximum re-usability...
I hear a lot of noise about dependency injection (StructureMap). How could one utilise such a tool in the given unit + integration test setup?
Can anyone share some wisdom? Thanks
I don't think you should physically separate the two. A good solution is to put the Microsoft.TeamFoundation.PowerTools.Tasks.CategoryAttribute above your tests to distinguish regular tests from integration tests. When running tests (even with MSBuild) you can decide to run only the tests you're interested in.
Alternatively you can put them in separate namespaces.
For code that will be executed in the setup and teardown phases, the base-class approach would work well. For integration tests, you can extract the functionality of your unit tests into well-parameterized non-test methods (preferably placed in another namespace) and call these "common" methods from both unit and integration tests. Putting unit tests, integration tests and common methods into separate namespaces would suffice; there would be no need for extra assemblies.
One approach would be to create a separate file with helper methods that would be used across multiple testing contexts, and then include that file in both your unit tests and your functional tests. For the parts that vary, you could use dependency injection - for example, by passing in different factories. In the unit tests, the factory could build a fake object, and in the functional tests it could insert a real object into your test database.
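A minimal sketch of that factory idea (IPersonFactory and both implementations are made-up names, and Person is assumed to have a FirstName property):

public interface IPersonFactory
{
    Person Create();
}

// Unit tests: build an in-memory fake.
public class FakePersonFactory : IPersonFactory
{
    public Person Create() { return new Person { FirstName = "Test" }; }
}

// Functional tests: persist a real row to the test database first.
public class DbPersonFactory : IPersonFactory
{
    public Person Create()
    {
        var person = new Person { FirstName = "Test" };
        // ...insert person into the test database here...
        return person;
    }
}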
Whether you split the tests into two projects or keep them in one might depend on the number of classes/tests you have. Too many classes in a single project would make it difficult to dig through. If you do split them out, helper/common methods could be thrown into a third assembly, or you could make them public in the unit test assembly, and let the integration assembly reference that one. Make things only as complex as you have to.
On our project we have both integration and unit tests together but in separate folders. Our project layout is such that we have separate assemblies for the main sections (Domain, Services, etc). Each assembly has a matching test assembly. Test assemblies are allowed to reference other test assemblies.
This means Services.Test can reference Domain.Test which makes sense to us because Services references Domain in the actual code.
In terms of reusable pieces we have:
Builders - These provide a fluent interface for creating the most important/complex objects in our domain. These live in the main test folder for our domain. Our domain test assembly is referenced by all other test assemblies.
Mothers - These insert data into the database. They give back an Id for the inserted row which can be used to load the object if required. These live in the main test folder for our services.
Helpers - These are guys that do small things throughout our testing. For instance, we prefer to allow access to collections via IEnumerable, so we have a CollectionHelper.AssertCountIsEqualTo<_T>(int count, IEnumerable<_T> collection, string message) which wraps the IEnumerable in a List and asserts the count. Our helpers all live in a common test assembly which every other test assembly references.
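A possible shape for that helper (a sketch; the real implementation may differ, and Assert comes from your test framework):

using System.Collections.Generic;

public static class CollectionHelper
{
    public static void AssertCountIsEqualTo<T>(int count, IEnumerable<T> collection, string message)
    {
        // Wrap the IEnumerable in a List so it can be counted.
        Assert.AreEqual(count, new List<T>(collection).Count, message);
    }
}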
As for an IoC container: if you can use one on your project, it can be a huge help not only in testing (via auto-mocking) but also in general development. With the overhead of registering everything with the container, though, it might be a bit much for just testing.
After some experimenting, this is how you can re-use test methods:
public abstract class TestBase
{
    [TestMethod]
    public void BaseTestMethod()
    {
        Assert.IsTrue(true);
    }
}

[TestClass]
public class UnitTest : TestBase
{
}

[TestClass]
public class IntegrationTest : TestBase
{
}
The unit and integration test class will pick up the base class test methods and run them as two separate tests.
You should be able to create a parameterised constructor on the base class to inject your mocks or resources, as sketched below.
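For instance (a sketch: MSTest still needs a parameterless constructor on the concrete test classes, so each subclass passes its resource up to the base; IMyService and FakeMyService are assumed names):

public abstract class TestBase
{
    private readonly IMyService _service;

    protected TestBase(IMyService service)
    {
        _service = service;
    }

    [TestMethod]
    public void BaseTestMethod()
    {
        Assert.IsNotNull(_service.GetRandomPeople());
    }
}

[TestClass]
public class UnitTest : TestBase
{
    public UnitTest() : base(new FakeMyService()) { } // fake resource
}

[TestClass]
public class IntegrationTest : TestBase
{
    public IntegrationTest() : base(new MyService()) { } // real resource
}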
I think this method can only be used with classes within the same assembly, so it looks like the single-assembly approach will have to do for now.
Thanks for the tips ppl!
If the only difference between many of your unit tests and the corresponding integration tests is that the latter use "real" resources rather than fakes (mocks), one approach is the following:
Make a flag is_unit_test available to your test class from the outside
In the class setup, make fake or real resources available depending on the flag. For instance, if you need to use a DB API that is either real (an instance of class DBreal) or fake (an instance of class DBfake), your initialization may look like: if is_unit_test then this.dbapi = new DBfake else this.dbapi = new DBreal. (DBreal and DBfake need to conform to the same interface; let's call it DBapi.)
From the point of view of your test methods, step 2 amounts to (manual) Dependency Injection: The method does not know what class actually implements its dependency (the DB API). Rather, the dependency is injected into the method from the outside.
Where your test cases require the DB API, they use this.dbapi
Now you execute one and the same test class with the flag set for unit testing and without the flag set for integration testing. (How to make the flag available depends on your unit testing framework.)
Obviously, the same approach can be used if you need more than one resource in a test class.
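In C#, steps 2-4 could be wired up roughly like this (DBapi, DBfake and DBreal are the names used above; how the flag gets supplied depends on your test framework):

public abstract class DataDrivenTestBase
{
    protected DBapi dbapi;

    // Supplied differently for the unit-test run and the integration run.
    protected abstract bool IsUnitTest { get; }

    [TestInitialize]
    public void SetUp()
    {
        // Step 2: inject the fake or the real resource depending on the flag.
        dbapi = IsUnitTest ? (DBapi)new DBfake() : new DBreal();
    }
}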
Some people will find the explicit if in step 2 ugly. To make it more "elegant", you could employ an Inversion of Control (IoC) container (in Java for instance Spring or PicoContainer) to semi-automate the Dependency Injection instead. The initialization would then look like this.dbapi = myContainer.create(DBapi).
In simple cases, an IoC container will only complicate things, because configuring the container is not trivial, involves learning, opens the possibility of a new class of mistakes, and involves additional files.
In more complex cases however, the container makes things easier, because if the creation of your resources requires still other resources, the container will take care of their initialization as well and complexity would indeed go down. But unless you really get there, I suggest to KISS.
Unless you have an important reason for separate assemblies, they violate KISS. I suggest to wait for that reason first.
(Note that some people may tell you that Dependency Injection is only done at the class level.
I consider this unwarranted dogmatism. Injection simply means that a caller does not know the exact class it is calling, no matter how it obtained the object. It often becomes more useful when applied at the class level, but depending on your test framework this may make things overly complicated in the above case. Note that some test frameworks have their own injection capabilities, though.)
I would like to know what your practices are when you test your classes.
For example, I like to use inheritance with my fixtures.
Given two classes BaseClass and SubClass, I make two other classes, BaseClassFixture and SubClassFixture (SubClassFixture is a subclass of BaseClassFixture). That way I'm sure I don't break code which uses SubClass as a BaseClass (and people who extend my class can check that they do things right by creating another subclass of my fixture).
I do fixture inheritance with interfaces too.
For example, when I create a fixture for IList, I check that any Add increases Count by one.
When I have a concrete class which implements IList I just create a fixture named MyConcreteClassIListFixture.
In that case, the fixture for my interface is abstract and I let my subclass create the instance for my tests.
I think it's a kind of design by contract (see Bertrand Meyer), because I check invariants before and after any tests.
I do this especially with published interfaces or classes.
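Concretely, the abstract fixture might look like this (a sketch, NUnit-style; MyConcreteClass stands in for the class under test):

using System.Collections;
using NUnit.Framework;

public abstract class IListFixture
{
    // Subclasses decide which concrete implementation is under test.
    protected abstract IList CreateList();

    [Test]
    public void Add_IncreasesCountByOne()
    {
        IList list = CreateList();
        int countBefore = list.Count;

        list.Add(new object());

        Assert.AreEqual(countBefore + 1, list.Count);
    }
}

[TestFixture]
public class MyConcreteClassIListFixture : IListFixture
{
    protected override IList CreateList() { return new MyConcreteClass(); }
}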
And you... what are your practices ??
My most important rule is that each test should be atomic and should run in no particular order.
For unit tests, they should strictly obey separation of concerns.
For integration tests, I pay extra attention to make sure they follow the most important rule.
Also, tests should follow the DRY rule as much as possible along with the code.
There are a couple of important things to keep in mind when writing unit tests.
1) Unit Tests should be independent:
Unit tests must be independent. This means that your unit test should not depend on external things to run. This includes internet connections, external web services, etc.
2) Unit Tests should be fast:
Unit tests should run fast. You can write unit tests in multiple ways; some of them include data access even though you don't need to access data to run the test. You can always use mock objects and mock the data access layer.
3) Good naming convention:
Unit tests should have a good naming convention and should read like stories.
Here is one example:
public class when_user_transfers_money_from_source_account_to_destination_account
{
    public void make_sure_error_is_thrown_when_source_account_has_insufficient_funds()
    {
    }
}
Here is a good screencast that covers many of the above points:
http://screencastaday.com/ScreenCasts/32_Introduction_to_Mocking.aspx