I've created a unit test that tests interactions on my ViewModel class in a Silverlight application. To be able to do this, I'm mocking the service interface injected into the ViewModel, using the Moq framework.
To verify that the bound object in the ViewModel is converted properly, I've used a callback:
[Test]
public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
{
    var vm = CreateNewCampaignViewModel();
    var proposal = CreateNewProposal(1, "New Proposal");
    Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>())).Callback((saveProposalParam p) =>
    {
        Assert.That(p.plainProposal, Is.Not.Null);
        Assert.That(p.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
        Assert.That(p.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
    });
    proposal.State = ObjectStates.Added;
    vm.CurrentProposal = proposal;
    vm.Save();
}
It works fine, but notice that with this mechanism the Assert and Act parts of the unit test have swapped places (the Assert is written before the Act). Is there a better way to do this while preserving the correct AAA order?
I'm not sure that you've actually changed the semantics of the AAA order. Consider the execution of the test: your mocked interface will not be called until the Act phase invokes it. Therefore, during execution, your test still follows the Arrange, Act, Assert flow.
The alternative would be to use dependency injection: define an interface between your CampaignViewModel and the web service it uses. You can then create a test double class in your unit tests that saves the parameter information, and assert on that class's member/property rather than using Moq to create a proxy on the fly.
Moq should not be used to simulate storage or assignment. Rather, use Moq to provide dummy mechanisms and values that allow your unit tests to execute. If asserting storage is a requirement, then take the time to create a class that will hold on to your values.
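For example, here is a minimal sketch of that approach; IProposalService is an assumed name for the injected service interface, and the rest reuses the types from the question:

// Hand-rolled test double; IProposalService is a hypothetical name for
// the service interface injected into the ViewModel.
public class FakeProposalService : IProposalService
{
    // Captures the argument the ViewModel passes during the Act phase.
    public saveProposalParam LastParam { get; private set; }

    public void SaveProposalAsync(saveProposalParam p)
    {
        LastParam = p;
    }
}

[Test]
public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
{
    // Arrange
    var service = new FakeProposalService();
    var vm = CreateNewCampaignViewModel(service); // assumes the factory accepts the service
    var proposal = CreateNewProposal(1, "New Proposal");
    proposal.State = ObjectStates.Added;
    vm.CurrentProposal = proposal;

    // Act
    vm.Save();

    // Assert
    Assert.That(service.LastParam.plainProposal, Is.Not.Null);
    Assert.That(service.LastParam.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
    Assert.That(service.LastParam.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
}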
I am writing unit tests for my Web API application, created with .NET Core. I use MSTest with Moq for mocking.
The application has many layers, as below:
Controller => Manager classes => Facade classes => Contract class
I am passing an entity, which I have mocked, into the controller.
[TestMethod]
public void InvoiceUpdateFinanceTest_Success()
{
    var mock = mockFactory.OutstandingInvoicesDetailsMock();
    dataController.InvoiceUpdateFinance(mock);
    Assert.IsTrue(true);
}
The contract class creates a DB connection instance and makes the database calls. A code sample from the contract layer is below:
public class GradeWiseSlab : IMaintainContract
{
    IDbConnection _db = new DBModel().InstanceCreation();
    DynamicParameters dp = new DynamicParameters();

    public MaintainContractEntities MaintainContractEntities { get; set; }

    public void SaveData()
    {
        dp.Add("#N_VENDOR_ID", MaintainContractEntities.VendorId);
        dp.Add("#N_CONTRACT_SUB_TYPE_ID", MaintainContractEntities.SubContractType);
        dp.Add("#N_PAYMENT_TERM", MaintainContractEntities.PaymentTerm);
        dp.Add("#S_TRANSACTION_CURRENCY", MaintainContractEntities.TransactionCurrency);
        dp.Add("#S_DOC_NAME", MaintainContractEntities.DocNames);
        dp.Add("#S_APPROVAL_STATUS", MaintainContractEntities.ApproalStatus);
        dp.Add("#D_FROM_DATE", MaintainContractEntities.FromDate);
        dp.Add("#D_TO_DATE", MaintainContractEntities.ToDate);
        this._db.Query(Constant.SP_GRADE_WISE_SLAB, dp, commandTimeout: 0, commandType: CommandType.StoredProcedure);
    }
}
How can I write test cases that cover all the layers, i.e. from controller to contract, without touching an actual DB?
I am not using Entity Framework; I am using the Dapper library.
If you were using Entity Framework, EF now has an in-memory database provider. But that does not seem to be the case here.
I have to rewrite some of your "terms", since you are not showing a lot of code and the terms are a little confusing:
Controller (Web API layer) => Manager classes (business logic layer) => DataLayer (this is where the calls to stored procedures actually exist). From your description, I would say my "DataLayer" is your "Contract" layer. IMHO, "Contract" is a confusing and ambiguous name, and not a great name for a data layer. But I digress.
With the above design, you would INJECT some IDataLayer interface (for example, IEmployeeDataLayer) into your manager (EmployeeManager : IEmployeeManager). You would have a "real" concrete EmployeeDataLayer object, and you would also be able to mock IEmployeeDataLayer in order to unit test your EmployeeManager object.
I actually have a "close" sample here:
https://github.com/granadacoder/dotnet-core-on-linux-one/blob/master/src/Bal/Managers/EmployeeManager.cs
public class EmployeeManager : IEmployeeManager
{
    private readonly ILogger<EmployeeManager> logger;
    private readonly IEmployeeDataLayer iedl;

    public EmployeeManager(ILoggerFactory loggerFactory, IEmployeeDataLayer iedl)
    {
        this.logger = loggerFactory.CreateLogger<EmployeeManager>();
        this.iedl = iedl;
    }
}
In my unit test code, I would mock IEmployeeDataLayer in order to test the EmployeeManager class.
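A rough sketch of what that unit test could look like; GetEmployeeById, GetEmployee, and the Employee entity are hypothetical names, so substitute your real members:

using Microsoft.Extensions.Logging.Abstractions;
using Moq;

[TestMethod]
public void GetEmployee_ReturnsEmployeeFromDataLayer()
{
    // Mock the data layer so no database is touched.
    var dataLayer = new Mock<IEmployeeDataLayer>();
    dataLayer.Setup(dl => dl.GetEmployeeById(42))               // hypothetical data-layer method
             .Returns(new Employee { Id = 42, Name = "Test" }); // hypothetical entity

    var manager = new EmployeeManager(NullLoggerFactory.Instance, dataLayer.Object);

    var result = manager.GetEmployee(42); // hypothetical manager method

    Assert.AreEqual("Test", result.Name);
    dataLayer.Verify(dl => dl.GetEmployeeById(42), Times.Once);
}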
Your question is: how do I test the "real" EmployeeDataLayer without hitting a real database?
This is the i-don't-think-it-exists breaking point. All of the code in the data layer is coded to (really, really) hit a database.
So this is where holy wars start. Some will say "wire it to a SQL Express database", but IMHO that is when these stop being unit tests (which run only in memory) and become functional tests.
Since you are not using EF, you probably have to make a decision: be OK with writing unit tests against your business logic layer, and "skip" the data layer. In fact, I put the below attribute on my data-layer classes:
https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.codeanalysis.excludefromcodecoverageattribute?view=netcore-3.1
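Applying it is a one-liner; a minimal sketch:

using System.Diagnostics.CodeAnalysis;

// Excluded from coverage metrics: this class is exercised by
// functional tests against a real database, not by unit tests.
[ExcludeFromCodeCoverage]
public class EmployeeDataLayer : IEmployeeDataLayer
{
    // ... real ADO.NET/Dapper calls live here ...
}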
Your other choice is to move to something that is more like this (EF in-memory):
https://learn.microsoft.com/en-us/ef/core/providers/in-memory/?tabs=dotnet-core-cli
Personally (and I actually do this when not using EF, and I also use Dapper because Dapper performs well), I skip UNIT testing the data layer (emphasis on the "unit" testing).
But our QA will write FUNCTIONAL tests against a real database engine (think SQL Express). The caveat here is that they have to be able to completely DELETE/FLUSH the "QA database" (we do it once a week) and rebuild it from seed data. This prevents people from writing functional tests that look for hard-coded surrogate primary keys on the tables, and forces them to look for items by unique constraints instead. Aka, it prevents getting "too married" to specific data inside the QA functional-testing database.
(Note, this is a similar concept to the EF in-memory database, because when you start unit tests with an EF in-memory DB, it essentially "is empty" at startup.)
PS
You can see a more EF oriented project setup here:
https://github.com/granadacoder/oracle-ef-issues-demo/tree/master/src
PPS
I'm not saying the ideas above are an exhaustive list of possibilities. But when other ideas start crossing the line of "just being in memory", that's when these mini holy wars start. I am very firm in our company about the "in memory" line in the sand. We used to have a lot of "MyProject.Tests.csproj" projects, where the name could refer to unit tests or functional tests. Now I "strongly encourage" the projects to be called "MyProject.UnitTests.csproj" OR "MyProject.FunctionalTests.csproj" to disambiguate. Not everybody loves my "draw a line in the sand" approach.
I know this is a basic question, but due to my lack of knowledge I couldn't start writing tests for my application. So here I am, trying to learn and understand what to test and how to test it, based on my application scenario.
class MyController {
    MyService service = new MyService();

    // handler method (signature simplified)
    public void handleRequest(Model model, Long keyVal) {
        List<String> myList = service.generateSomeList(keyVal);
        model.add("myList", myList);
    }
}
class MyService {
    ThirdPartyService extService = new ThirdPartyService();
    MyDao dao = new MyDao();

    public List<String> generateSomeList(Long key) {
        List<String> resultList = dao.fetchMyList(key);
        extService.processData(resultList); // uses 3rd-party service to process the result; returns nothing
        return formattedList(resultList);
    }

    private List<String> formattedList(List<String> listToProcess) {
        // do some formatting
    }
}
class MyDao {
    List<String> fetchMyList(Long key) {
        // use DB connection to run SQL and return result
    }
}
I want to do both unit testing and integration testing. So some of my questions are:
Do I have to unit test MyDao? I don't think so, since I can test the query result at the service level.
What are the possible test cases at the service level? I can think of testing the result from the DB and testing the formatting function. Any other tests that I missed?
When testing the generateSomeList() method, is it OK to create a dummy String list and test it against the result, like the code below? I am creating the list myself and then testing against it myself; is this a proper/correct way to write the test?
@Test
public void generateSomeListTest() {
    // Step 1: create a dummy string list, e.g. dummyList = ["test1", "test2", "test3"]
    // Step 2: when(mydao.fetchMyList(anyLong())).thenReturn(dummyList);
    // Step 3: result = service.generateSomeList(123L);
    // Step 4: assertEquals(result.get(i), dummyList.get(i));
}
I don't think I have to test the third-party service, but I do think I have to make sure it is being called. Is this correct? If yes, how can I do that with Mockito?
How do I make sure the third-party service has really processed my data? Since its return type is void, how can I test that it really did its job (e.g., sent an email)?
Do I have to write tests for the controller, or can I just write an integration test?
I would really appreciate it if you could answer these questions to help me understand the testing part of the application.
Thanks.
They're not really unit tests, but yes, you should test your DAOs. One of the main points of using DAOs is precisely that they're relatively easy to test (you store some test data in the database, then call a DAO method which executes a query, and check that the method returns what it should return), and that they make the service layer easy to test by mocking the DAOs. You should definitely not use the real DAOs when testing the services. Use mock DAOs. That will make the service tests much simpler, and much, much faster.
Testing the results from the DB is the job of the DAO test. The service test should use a mock DAO that returns hard-coded results, and check that the service does what it should do with these hard-coded results (formatting, in this case).
Yes, it's fine.
Usually, it's sufficient to stub the dependencies; verifying that they have been called is often redundant. In this case, it could be a good idea, since the third-party service doesn't return anything. But that's a code smell: why doesn't it return something? See the verify() method in Mockito. It's the very first point in the Mockito documentation: http://docs.mockito.googlecode.com/hg/latest/org/mockito/Mockito.html#1
The third-party service is supposed to have been tested separately and thus be reliable. So you're supposed to assume that it does what its documentation says it does. The test for A, which uses B, is not supposed to test B. Only A. The test of B tests B.
A unit test is usually simpler to write and faster to execute. It's also easier to test corner cases with unit tests. Integration tests should be more coarse-grained.
Just a note: what your code severely lacks, as written, is dependency injection. That's what will make your code testable. It's very hard to test as-is, because the controller creates its service, which creates its DAO. Instead, the controller should be injected with the service, and the service should be injected with the DAO. That's what allows injecting a mock DAO into the service to test the service in isolation, and injecting a mock service into the controller to test the controller in isolation.
Using Ninject, I have the following and wish to test using FluentAssertions:
[Test]
public void InterfacesEndingWithFactoryShouldBeBoundAsFactories() {
    // Given
    IKernel kernel = new StandardKernel();
    kernel.Bind(services => services
        .From(AppDomain.CurrentDomain
            .GetAssemblies()
            .Where(a => !a.FullName.Contains("Tests")))
        .SelectAllInterfaces()
        .EndingWith("Factory")
        .BindToFactory()
    );

    // When
    var factory = kernel.Get<ICustomerManagementPresenterFactory>();

    // Then
    factory.Should().NotBeNull();
}
Is there any good way to test whether the factories are actually bound properly?
I wrote an extension package for Fluent Assertions to test my Ninject bindings. Using it, your test could be rewritten like this:
[Test]
public void InterfacesEndingWithFactoryShouldBeBoundAsFactories() {
    // Given
    IKernel kernel = new StandardKernel();
    kernel.Bind(services => services
        .From(AppDomain.CurrentDomain
            .GetAssemblies()
            .Where(a => !a.FullName.Contains("Tests")))
        .SelectAllInterfaces()
        .EndingWith("Factory")
        .BindToFactory()
    );

    // When
    // Then
    kernel.Should().Resolve<ICustomerManagementPresenterFactory>().WithSingleInstance();
}
As suggested by @Will Marcouiller, I would also extract the code that bootstraps the kernel into its own class, so it can be invoked in your app's composition root as well as in your unit tests.
First off, there is no Ninject infrastructure to test which types are affected and which are not. Unit testing this is very complicated due to the fluent syntax and its many interfaces which are returned when calling a method. So all you can practically do is integration testing.
Now, in my opinion, the From(...) is problematic. If you created a component where you could replace the assemblies you pass to From, created an extra test assembly containing a few types, and then tested whether, let's say, interface IFoo is not bound, interface IFooFactory is bound, class Foo is not bound, and so on, you would have a functioning integration test.
Consider, however, that if there is no binding for IFooFactory and SomeClass uses IFooFactory as a constructor argument, Ninject will throw an exception. So if you have a composition root, simply starting up the application will tell you whether the necessary bindings exist or not.
This test is even more useful than the factory-convention integration test. Consider what happens if someone accidentally binds the factory manually, too. This won't show up with the convention integration test, but when starting the application, Ninject will throw an exception stating that there are multiple bindings for this one interface.
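That "just start it up" check can itself be captured as a coarse-grained test. A minimal sketch, assuming a hypothetical Bootstrapper class that applies your real production bindings:

[Test]
public void CompositionRoot_ResolvesTheObjectGraph()
{
    // Bootstrapper.CreateKernel() is a hypothetical helper that registers
    // the same bindings the application uses at startup.
    using (IKernel kernel = Bootstrapper.CreateKernel())
    {
        // Resolving forces Ninject to satisfy every constructor dependency;
        // a missing binding, or an accidental duplicate one, surfaces here
        // as an ActivationException.
        var factory = kernel.Get<ICustomerManagementPresenterFactory>();
        factory.Should().NotBeNull();
    }
}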
I have this simple method which calls the TFS (Team Foundation Server) API to get a WorkItemCollection object. I then convert it into an entity class and also add it to the cache. As you can see, this is very simple.
How should I unit test this method? The only important thing it does is call the TFS API. Is it worth testing such methods? If yes, then how should we test them?
One way I can think of is to mock the call to Query.QueryWorkItemStore(query), return an object of type WorkItemCollection, and verify that the method converts the WorkItemCollection to a List and adds the result to the cache.
Also, as I am using the dependency injection pattern here, I am injecting dependencies for:
cache
Query
Should I pass dependencies of a mocked type (using Moq), or should I pass the proper class types?
public virtual List<Sprint> Sprint(string query)
{
    List<Sprint> sprints = Cache.Get<List<Sprint>>(query);
    if (sprints == null)
    {
        WorkItemCollection items = Query.QueryWorkItemStore(query);
        sprints = new List<Sprint>();
        foreach (WorkItem i in items)
        {
            Sprint sprint = new Sprint
            {
                ID = i.Id,
                IterationPath = i.IterationPath,
                AreaPath = i.AreaPath,
                Title = i.Title,
                State = i.State,
                Goal = i.Description,
            };
            sprints.Add(sprint);
        }
        Cache.Add(sprints, query, this.CacheExpiryInterval);
    }
    return sprints;
}
Should I pass dependencies of a mocked type (using Moq), or should I pass the proper class types?
In your unit tests, you should pass a mock. There are several reasons:
A mock is transparent: it allows you to check that the code under test did the right thing with the mock.
A mock gives you full control, allowing you to test scenarios that are difficult or impossible to create with the real server (e.g. throw IOException)
A mock is predictable. A real server is not - it may not even be available when you run your tests.
Things you do on a mock don't influence the outside world. You don't want to change data or crash the server by running your tests.
A test with mocks is faster. No connection to the server or real database queries have to be made.
That being said, automated integration tests which include a real server are also very useful. You just have to keep in mind that they will have lower code coverage, will be more fragile, and will be more expensive to create/run/maintain. Keep your unit tests and your integration tests separate.
Edit: some collaborator objects, like your Cache object, may also be very unit-test friendly. If they have the same advantages as the mocks I list above, then you don't need to create a mock. For example, you typically don't need to mock a collection.
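To illustrate, a hedged sketch of the cache-hit case for the Sprint method above; ICache, IWorkItemQuery, and SprintService are stand-in names for whatever interfaces and class you actually inject:

[Test]
public void Sprint_WhenListIsCached_DoesNotQueryTfs()
{
    var cached = new List<Sprint> { new Sprint { ID = 1, Title = "Sprint 1" } };

    var cache = new Mock<ICache>();         // hypothetical cache abstraction
    var query = new Mock<IWorkItemQuery>(); // hypothetical TFS query abstraction
    cache.Setup(c => c.Get<List<Sprint>>("my query")).Returns(cached);

    var service = new SprintService(cache.Object, query.Object); // hypothetical owner of Sprint()

    var result = service.Sprint("my query");

    // The cached list is returned as-is and TFS is never hit.
    Assert.That(result, Is.SameAs(cached));
    query.Verify(q => q.QueryWorkItemStore(It.IsAny<string>()), Times.Never);
}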
This is a tough one, because not too many people use Pex & Moles, or so I think (even though Pex is a really great product - much better than any other unit testing tool).
I have a Data project that has a very simple model with just one entity (DBItem). I've also written a DBRepository within this project that manipulates this EF model. The repository has a method called GetItems() that returns a list of business layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
    using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        List<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList();
        return result.ConvertAll(i => i.ToBusinessObject());
    }
}
So now I'd like to create some unit tests for this particular method. I'm using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parametrised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that tests that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
    [PexMethod]
    public void GetItemsTest(ObjectSet<DBItem> items)
    {
        MEFContext.ConstructorString = (@this, name) =>
        {
            var mole = new SEFContext();
        };

        DBRepository repo = new DBRepository();
        IList<BLItem> result = repo.GetItems();
        IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();

        if (result.Count != manual.Count)
        {
            throw new Exception();
        }
    }
}
Then I run Pex explorations for this particular parametrised unit test, but I get a "path bounds exceeded" error. Pex starts this test by providing null to the test method (so items = null). This is the code that Pex is running:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
    this.GetItemsTest((ObjectSet<DBItem>)null);
}
This was the additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Update attribute [PexMethod(MaxStack = 200)]
Question
Am I doing this the correct way or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will be running? (I'm not sure it is now.) I'm running just Pex & Moles - no VS Test or NUnit or anything else.
I guess I should probably set some limit on how many items Pex should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc.). Instead, Moles allows you to mock these parts of your app so that you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g. by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria is relevant.
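A minimal sketch of that idea: hide the EF query source behind a small interface so the test can substitute an in-memory list. IItemSource, InMemoryItemSource, and the DBRepository constructor taking the source are all assumptions for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

public interface IItemSource
{
    IQueryable<DBItem> Items { get; }
}

public class InMemoryItemSource : IItemSource
{
    private readonly List<DBItem> items;
    public InMemoryItemSource(IEnumerable<DBItem> seed) { items = seed.ToList(); }
    public IQueryable<DBItem> Items { get { return items.AsQueryable(); } }
}

[Test]
public void GetItems_ReturnsOnlyItemsChangedWithinTenDays()
{
    var source = new InMemoryItemSource(new[]
    {
        new DBItem { Changed = DateTime.Today },              // recent: should be returned
        new DBItem { Changed = DateTime.Today.AddDays(-30) }, // stale: should be filtered out
    });

    var repo = new DBRepository(source); // assumes the repository accepts the source

    Assert.AreEqual(1, repo.GetItems().Count);
}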
I am just getting to grips with Pex also... my issues surrounded me wanting to use it with Moq ;)
Anyway...
I have some methods similar to yours that have the same problem. When I increased the max, the errors went away. Presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have had to increase the timeout on the code contract validation as well.
One thing that you should probably be doing, though, is passing in all the dependent objects as parameters, i.e. don't instantiate the repo in the method, but pass it in.
A general problem you have is that you are instantiating big objects in your method. I do the same in my DAL classes, but then I am not trying to unit test them in isolation. I build up datasets and use these to test my data access code against.
I use Pex on my business logic and objects.
If I were to try to test my DAL code, I'd have to use IoC to pass the data context into the methods, which would then make testing possible, as you can mock the data context.
You should use the Entity Framework repository pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx