I'm trying out MVVM Light, partly inspired by the EventToCommand capabilities, which seem to make it easier to handle drag-and-drop from outside my app in the View Model and in the XAML. However, I am confused about how to unit test the RelayCommand. My RelayCommand is declared simply as
public RelayCommand<DragEventArgs> DropFile { get; private set; }
and then the functionality is assigned within the ViewModel constructor, not inline but using a method on the ViewModel
this.DropFile = new RelayCommand<DragEventArgs>(dropFileHandler);
When I'm writing a unit test for the DropFile RelayCommand, I cannot see what to call. Should I be calling
testTarget.DropFile.Execute(params)
and, if so, how does one construct the params, given that DragEventArgs has only an empty constructor and its key properties are getters with no setters?
This is true for standard commands as well as the MVVM Light-specific relay commands.
Logic that needs to be unit testable should be implemented in the ViewModel as a method and then called from the command.
What's left in the command should be the logic to extract information from the UI, i.e. converting the parameter to the appropriate type and passing it on.
This way the ViewModel as an entity is unit testable, the commands are kept very thin, and everyone's happy =].
N.B. If you want to be particularly strict with your unit testing, the conversion should happen in the ViewModel method as well, but typically, as long as the command can handle a null parameter, you're all set, which is why I grow lazy.
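For example, here is a minimal sketch of that split, assuming MVVM Light's RelayCommand<T>; MyViewModel, DroppedFiles and HandleDroppedFiles are hypothetical names used only for illustration:
public class MyViewModel
{
    public RelayCommand<DragEventArgs> DropFile { get; private set; }
    public List<string> DroppedFiles { get; private set; }
    public MyViewModel()
    {
        DroppedFiles = new List<string>();
        // The command stays thin: it only converts the UI event args ...
        DropFile = new RelayCommand<DragEventArgs>(e => HandleDroppedFiles(ExtractPaths(e)));
    }
    // ... and the testable logic lives in an ordinary method that never sees DragEventArgs.
    public void HandleDroppedFiles(IEnumerable<string> paths)
    {
        DroppedFiles.AddRange(paths);
    }
    private static IEnumerable<string> ExtractPaths(DragEventArgs e)
    {
        if (e == null || !e.Data.GetDataPresent(DataFormats.FileDrop))
            return Enumerable.Empty<string>();
        return (string[])e.Data.GetData(DataFormats.FileDrop);
    }
}
// The unit test then bypasses DragEventArgs entirely:
[Test]
public void HandleDroppedFiles_records_the_dropped_paths()
{
    var vm = new MyViewModel();
    vm.HandleDroppedFiles(new[] { @"c:\temp\file.txt" });
    CollectionAssert.Contains(vm.DroppedFiles, @"c:\temp\file.txt");
}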
Hope that helps
How do I write a mock test that allows me to validate that an inaccessible property (debugLog) is set to true? Do I try to find a way to read the value of the property? Do I verify that console.debug is set? Does a spy make sense in this situation, or should I use a stub?
Class X
let showDebugLogs = false,
debugLog = _.noop
/**
* Configures Class X instances to output or not output debug logs.
* @param {Boolean} state The state.
*/
exports.showDebugLogs = function (state) {
showDebugLogs = state;
debugLog = showDebugLogs ? console.debug || console.log : _.noop;
};
Unit Test
describe('showDebugLogs(state)', function () {
let spy;
it('should configure RealtimeEvents instances to output or not output debug logs', function () {
spy = sinon.spy(X, 'debugLog');
X.showDebugLogs(true);
assert.strictEqual(spy.calledOnce, true, 'Debug logging was not enabled as expected.');
spy.restore();
});
});
Mock testing is used for "isolating" a class under test from its environment, to decrease its side effects and to increase its testability. For example, if you are testing a class which makes AJAX calls to a web server, you probably do not want to:
1) wait for AJAX calls to complete (waste of time)
2) observe your tests fall apart because of possible networking problems
3) cause data modifications on the server side
and so on.
So what you do is "mock" the part of your code which makes the AJAX call, and, depending on your test, you either:
1) return success, along with the response that accompanies a successful request, or
2) return an error, reporting the nature of the failure, to see how your code handles it.
For your case, what you need is just a simple unit test case. You can use introspection techniques to assert the internal state of your object, if that is what you really want to do. However, this comes with a warning: it is discouraged (please see the Notes at the bottom).
Unit testing should be done to test behavior or public state of an object. So, you should really NOT care about internals of a class.
Therefore, I suggest you reconsider what you are trying to test and find a better way of testing it.
Suggestion: Instead of checking a flag in your class, you can mock up a logger for your test and write at least two test cases as follows:
1) When showDebugLogs = true, make sure the log statement of your mock logger is fired.
2) When showDebugLogs = false, make sure the log statement of your mock logger is not called.
Notes: There has been a long debate between two schools of thought: a group advocating that private members/methods are implementation details and should NOT be tested directly, and another group which opposes this idea.
Excerpt from a Wikipedia article:
There is some debate among practitioners of TDD, documented in their
blogs and other writings, as to whether it is wise to test private
methods and data anyway. Some argue that private members are a mere
implementation detail that may change, and should be allowed to do so
without breaking numbers of tests. Thus it should be sufficient to
test any class through its public interface or through its subclass
interface, which some languages call the "protected" interface.[29]
Others say that crucial aspects of functionality may be implemented in
private methods and testing them directly offers advantage of smaller
and more direct unit tests
I have seen lots of posts (and debates!) about which way round UnitOfWork and Repository should go. One of the repository patterns I favor is the typed generic repository pattern, but I fear this has led to some issues with clean code and testability. Take the following repository interface and generic class:
public interface IDataEntityRepository<T> : IDisposable where T : IDataEntity
{
// CRUD
int Create(T createObject);
// etc.
}
public class DataEntityRepository<T> : IDataEntityRepository<T> where T : class, IDataEntity
{
private IDbContext Context { get; set; }
public DataEntityRepository (IDbContext context)
{
Context = context;
}
private IDbSet<T> DbSet { get { return Context.Set<T>(); } }
public int Create(T createObject)
{
DbSet.Add(createObject);
}
// etc.
}
// where
public interface IDbContext
{
IDbSet<T> Set<T>() where T : class;
DbEntityEntry<T> Entry<T>(T readObject) where T : class;
int SaveChanges();
void Dispose();
}
So basically I am using the Context property in each pattern to gain access to the underlying context.
My problem is now this: when I create my unit of work, it will effectively be a wrapper of the context I need the repository to know about. So, if I have a Unit Of Work that declares the following:
public UserUnitOfWork(
IDataEntityRepository<User> userRepository,
IDataEntityRepository<Role> roleRepository)
{
_userRepository = userRepository;
_roleRepository = roleRepository;
}
private readonly IDataEntityRepository<User> _userRepository;
public IDataEntityRepository<User> UserRepository
{
get { return _userRepository; }
}
private readonly IDataEntityRepository<Role> _roleRepository;
public IDataEntityRepository<Role> RoleRepository
{
get { return _roleRepository; }
}
I have a problem with the fact that the two repositories I am passing in both need to be instantiated with the very unit of work into which they are being passed. Obviously I could instantiate the repositories inside the constructor and pass in "this", but that tightly couples my unit of work to a particular concrete instance of the repositories and makes unit testing that much harder.
I would be interested to know if anyone else has headed down this path and hit the same wall. Both these patterns are new to me so I could well be doing something fundamentally wrong. Any ideas would be much appreciated!
UPDATE (response to @MikeSW)
Hi Mike, many thanks for your input. I am working with EF Code First, but I wanted to abstract certain elements so I could switch to a different data source or ORM if required, and because I am (trying!) to push myself down a TDD route using mocking and IoC. I think I have realised the hard way that certain elements cannot be unit tested in a pure sense but can have integration tests!
I'd like to pick up your point about repositories working with business objects or viewmodels etc. Perhaps I have misunderstood, but if I have what I see as my core business objects (POCOs), and I then want to use an ORM such as EF Code First to wrap around those entities in order to create, and then interact with, the database (and it's possible I may re-use these entities within a ViewModel), I would expect a repository to handle these entities directly in the context of some set of CRUD operations. The entities know nothing about the persistence layer at all, and neither would any ViewModel. My unit of work simply instantiates and holds the required repositories, allowing a transaction commit to be performed (across multiple repositories but the same context/session).
What I have done in my solution is to remove the injection of an IDataEntityRepository ... etc. from the UnitOfWork constructor, as this is a concrete class that must know about one and only one type of IDataEntityRepository it should be creating (in this case DataEntityRepository, which really should be better named EFDataEntityRepository). I cannot unit test this per se, because the whole unit's logic would be to establish the repositories with a context (itself) to some database; it simply needs an integration test. Hope that makes sense?!
To avoid dependency on each repository in your Unit of Work, you could use a provider based on this contract:
public interface IRepositoryProvider
{
DbContext DbContext { get; set; }
IDataEntityRepository<T> GetRepositoryForEntityType<T>() where T : class;
T GetRepository<T>(Func<DbContext, object> factory = null) where T : class;
void SetRepository<T>(T repository);
}
Then you could inject it into your UoW, which would look like this:
public class UserUnitOfWork: IUserUnitOfWork
{
public UserUnitOfWork(IRepositoryProvider repositoryProvider)
{
RepositoryProvider = repositoryProvider;
}
protected IDataEntityRepository<T> GetRepo<T>() where T : class
{
return RepositoryProvider.GetRepositoryForEntityType<T>();
}
public IDataEntityRepository<User> Users { get { return GetRepo<User>(); } }
public IDataEntityRepository<Role> Roles { get { return GetRepo<Role>(); } }
...
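With the provider injected, the unit of work itself is easy to unit test against a fake provider. A minimal sketch using Moq (assuming the provider is configured to hand back IDataEntityRepository<T> instances, per the contract above):
[Test]
public void Users_returns_the_repository_supplied_by_the_provider()
{
    var userRepository = new Mock<IDataEntityRepository<User>>();
    var provider = new Mock<IRepositoryProvider>();
    provider.Setup(p => p.GetRepositoryForEntityType<User>())
            .Returns(userRepository.Object);

    var unitOfWork = new UserUnitOfWork(provider.Object);

    // the UoW simply hands back whatever the provider resolved
    Assert.AreSame(userRepository.Object, unitOfWork.Users);
}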
Apologies for the tardiness of my response - I have been trying out various approaches to this in the meantime. I have marked up the answers above because I agree with the comments made.
This is one of those questions where there is more than one answer and it's very much dependent upon the overall approach. Whilst I agree that EF effectively provides a ready-made unit of work pattern, my decision to create my own unit of work and repository layers was to be able to control access to the database entities.
Where I struggled was in the need to be able to inject a repository into a unit of work. What I realised though was that in the case of EF, my unit of work was effectively a thin wrapper around multiple repositories with a Commit (SaveChanges) method. It was not responsible for executing specific actions such as FindCustomer etc.
So I decided that a unit of work could be tightly coupled to its specific type of DataRepository pattern. To ensure I had a testable pattern, I introduced a service layer that provided the facade for executing particular actions such as CreateCustomer, FindCustomers, etc. These services accepted an IUnitOfWork constructor parameter, which provided access to the repositories (as interfaces) as well as the Commit method.
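A rough sketch of the shape this took (IUnitOfWork's members, CustomerService and Customer are illustrative assumptions, not the exact production code):
public interface IUnitOfWork : IDisposable
{
    IDataEntityRepository<Customer> Customers { get; }
    int Commit();
}
public class CustomerService
{
    private readonly IUnitOfWork _unitOfWork;
    public CustomerService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }
    public void CreateCustomer(Customer customer)
    {
        // the service is the facade; the unit of work owns the repositories and the commit
        _unitOfWork.Customers.Create(customer);
        _unitOfWork.Commit();
    }
}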
I was then able to create fakes of both unit of work and/ or repositories for testing purposes. This just left me with the decision of what could be unit tested with fakes and what needed to be integration tested with the concrete instances.
And this also gives me the opportunity to control what actions are performed on the database and how they are performed.
I'm sure there are many ways to skin this particular cat, but the goal of providing a clean, testable interface has been just about met with this approach.
My thanks to g1ga and Mike for their input.
When using Entity Framework (EF) (which I assume you're using) you already have a generic repository: IDbSet. It's useless to add another layer on top just to call EF methods.
Also, a repository works with application objects (usually business objects, but they can be view models or object state). If you're just using DB entities, you kind of defeat the purpose of the Repository pattern (to isolate the business objects from the database). The original pattern deals only with business objects, but it is a useful pattern outside the business layer too.
The point is that EF entities are persistence objects and have (or should have) no relation to your business objects. You want to use the repository pattern to 'translate' the business objects to persistence objects and vice versa.
Sometimes an application object (like a viewmodel) may happen to be the same as a persistence entity (and in that case you can use the EF objects directly), but that's a coincidence.
About the Unit of Work (UoW), let's say that's tricky. Personally, I prefer the DDD (domain-driven design) approach and consider that any business object (BO) sent to the repository is a UoW, so it will be wrapped in a transaction.
If I need to update multiple BOs, I'll use a message-driven architecture to send commands to the relevant BOs. Of course, that's more complicated and requires being at ease with the concept of eventual consistency, but I'm not depending on a specific RDBMS.
If you know that you'll be using a specific RDBMS and that it will never change, you could start a transaction and pass the associated connection to each repository, with a commit at the end (that will be the UoW). If you're in a web setting, it's even easier: start a transaction when the request begins and commit when the request ends (you can use an ActionFilter in ASP.NET MVC).
However this solution is tied up to one RDBMS, so it won't apply to a NoSql or any storage which doesn't support transactions. For those cases, the message driven way is the best.
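For the web setting mentioned above, the per-request approach can be sketched with an ASP.NET MVC action filter; CurrentUnitOfWork is a hypothetical ambient unit of work / transaction holder:
public class UnitOfWorkAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // begin the transaction when the request (action) starts
        CurrentUnitOfWork.Begin();
    }
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // commit if the action completed normally, otherwise roll back
        if (filterContext.Exception == null)
            CurrentUnitOfWork.Commit();
        else
            CurrentUnitOfWork.Rollback();
    }
}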
Sorry for the long post...
While being introduced to a brownfield project, I'm having doubts regarding certain sets of unit tests and what to think of them. Say you have a repository class wrapping a stored procedure, and in the developer guide book a certain set of guidelines (rules) describes how this class should be constructed. The class could look like the following:
public class PersonRepository
{
public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
{
using (new SomeProfiler("someKey"))
{
var sp = Ioc.Resolve<IPersonStoredProcedure>();
sp.addNameArguement(personName);
sp.addCityArguement(cityName);
return sp.invoke();
}
}
}
Now, I would of course write some integration tests, testing that the SP can be invoked and that the behavior is as expected. However, should I also write unit tests that assert that:
Constructor for SomeProfiler with the input parameter "someKey" is called
The Constructor of PersonStoredProcedure is called
The addNameArgument method on the stored procedure is called with parameter personName
The addCityArgument method on the stored procedure is called with parameter cityName
The invoke method is called on the stored procedure
If so, I would potentially be testing the whole structure of a method, in addition to its behavior. My initial thought is that it is overkill. However, with regard to the coding practices enforced by the team, these tests ensure a uniform and 'correct' structure and that the next layer is called correctly (from DAL to DB, BLL to DAL, etc.).
In my case these type of tests, are performed for each layer of the application.
Follow-up question: the use of the SomeProfiler class smells a little like a convention to me. Instead of creating explicit tests for this, could one create convention-style tests by using static code analysis or unit tests + reflection?
Thanks in advance.
I think that your initial thought was right - this is overkill. Although you can use reflection to make sure that the class has the methods you expect, I'm not sure you want to test it that way.
Perhaps instead of unit testing you should use a tool such as FxCop/StyleCop or NDepend to make sure all of the classes in a specific assembly/dll have these properties.
Having said that, I'm a believer in "only code what you need": why test that a method exists? Either you use it somewhere in your code, in which case you can test that specific case, or you don't, and so it's irrelevant.
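If you still want to experiment with the reflection route despite that caveat, a convention-style test usually boils down to an assertion over the types in an assembly. A rough sketch (the namespace rule here is hypothetical):
[Test]
public void Repositories_follow_the_team_naming_convention()
{
    var repositoryTypes = typeof(PersonRepository).Assembly
        .GetTypes()
        .Where(t => t.IsClass && t.Name.EndsWith("Repository"));

    foreach (var type in repositoryTypes)
    {
        // hypothetical rule: all repositories live in a *.DataAccess namespace
        Assert.IsTrue(type.Namespace.EndsWith(".DataAccess"), type.FullName);
    }
}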
Unit tests should focus on behavior, not implementation. So writing a test to verify that certain arguments are set or passed in doesn't add much value to your testing strategy.
As the example provided appears to be communicating with your database, it can't truly be considered a "unit test" as it must communicate with physical dependencies that have additional setup and preconditions, such as availability of the environment, database schema, existing data, stored-procedures, etc. Any test you write is actually verifying these preconditions as well.
In its present condition, your best bet for these types of tests is to test the behavior provided by the class -- invoke a method on your repository and then validate that the results are what you expected. However, you'll suddenly realize that there's a hidden cost here -- the database maintains state between test runs, and you'll need additional setup or tear-down logic to ensure that the database is in a well-known state.
While I realize the intent of the question was about testing a "black box", it seems obvious that there's some hidden magic here in your API. My preference for solving the well-known-state problem is to use an in-memory database that is scoped to the current test, which isolates me from environment considerations and enables me to parallelize my integration tests. I'd wager that under the current design there is no "seam" to programmatically introduce a database configuration, so you're "hemmed in". In my experience, magic hurts.
However, a slight change to the existing design solves this problem and the "magic" goes away:
public class PersonRepository : IPersonRepository
{
private ConnectionManager _mgr;
public PersonRepository(ConnectionManager mgr)
{
_mgr = mgr;
}
public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
{
using (var p = _mgr.CreateProfiler("somekey"))
{
var sp = new PersonStoredProcedure(p);
sp.addArguement("name", personName);
sp.addArguement("city", cityName);
return sp.invoke();
}
}
}
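With that seam in place, a test can supply its own connection manager. A minimal sketch, assuming ConnectionManager can be substituted (virtual members or an interface) and using hypothetical InMemoryConnectionManager and SeedPeople test helpers:
[Test]
public void FindPersonsByNameAndCity_returns_the_matching_people()
{
    var mgr = new InMemoryConnectionManager();           // hypothetical in-memory test double
    SeedPeople(mgr, new Person("Jane", "Copenhagen"));    // hypothetical seed helper
    var repository = new PersonRepository(mgr);

    var result = repository.FindPersonsByNameAndCity("Jane", "Copenhagen");

    Assert.AreEqual(1, result.Count);
}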
I've created a unit test that tests interactions on my ViewModel class in a Silverlight application. To be able to do this test, I'm mocking the service interface injected into the ViewModel. I'm using the Moq framework to do the mocking.
To be able to verify that the bound object in the ViewModel is converted properly, I've used a callback:
[Test]
public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
{
var vm = CreateNewCampaignViewModel();
var proposal = CreateNewProposal(1, "New Proposal");
Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>())).Callback((saveProposalParam p) =>
{
Assert.That(p.plainProposal, Is.Not.Null);
Assert.That(p.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
Assert.That(p.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
});
proposal.State = ObjectStates.Added;
vm.CurrentProposal = proposal;
vm.Save();
}
It is working fine, but if you've noticed, with this mechanism the Assert and Act parts of the unit test have switched places (the Assert comes before the Act). Is there a better way to do this while preserving the correct AAA order?
I'm not sure that you've changed the semantics of the AAA order. Consider the execution of the test. Your mocked interface will not be called until the Action invokes it. Therefore, during execution, your program still follows the Arrange, Act, and Assert flow.
The alternative would be to use Dependency Injection and create an interface between your CampaignViewModel and the web service that it uses. You can then create a class in your unit tests that saves your parameter information and assert on that class member/property rather than use Moq to create a proxy on the fly.
Moq should not be used to simulate storage or assignment. Rather, use Moq to provide dummy mechanisms and values to allow your Unit Tests to execute. If Asserting storage is a requirement, then take the time to create a class that will hold on to your values.
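If you do keep the Moq callback, one common compromise is to let the callback merely capture the argument and move the assertions after the Act, which restores the AAA order. A sketch based on the question's types:
[Test]
public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
{
    // Arrange
    var vm = CreateNewCampaignViewModel();
    var proposal = CreateNewProposal(1, "New Proposal");
    saveProposalParam captured = null;
    Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
            .Callback<saveProposalParam>(p => captured = p);

    // Act
    proposal.State = ObjectStates.Added;
    vm.CurrentProposal = proposal;
    vm.Save();

    // Assert
    Assert.That(captured, Is.Not.Null);
    Assert.That(captured.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
    Assert.That(captured.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
}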
I was wondering whether the object under test should be a field and thus set up during a SetUp method (e.g. in JUnit, NUnit, MSTest, …).
Consider the following examples (this is C♯ with MsTest, but the idea should be similar for any other language and testing framework):
public class SomeStuff
{
public string Value { get; private set; }
public SomeStuff(string value)
{
this.Value = value;
}
}
[TestClass]
public class SomeStuffTestWithSetUp
{
private string value;
private SomeStuff someStuff;
[TestInitialize]
public void MyTestInitialize()
{
this.value = Guid.NewGuid().ToString();
this.someStuff = new SomeStuff(this.value);
}
[TestCleanup]
public void MyTestCleanup()
{
this.someStuff = null;
this.value = string.Empty;
}
[TestMethod]
public void TestGetValue()
{
Assert.AreEqual(this.value, this.someStuff.Value);
}
}
[TestClass]
public class SomeStuffTestWithoutSetup
{
[TestMethod]
public void TestGetValue()
{
string value = Guid.NewGuid().ToString();
SomeStuff someStuff = new SomeStuff(value);
Assert.AreEqual(value, someStuff.Value);
}
}
Of course, with just one test method, the first example is much too long, but with more test methods this could save quite some redundant code.
What are the pros and cons of each approach? Are there any “Best Practices”?
It's a slippery slope once you start initializing fields and generally setting up the context of your test within the test method itself. This leads to large test methods and really unmanageable fixtures that don't explain themselves very well.
Instead, you should look at BDD-style naming and test organization. Make one fixture per context, rather than one fixture per system under test. Then your [SetUp] truly does set up the context, and your tests can be simple one-liner asserts.
It's much easier to read when you see a test output that does this:
OrderFulfillmentServiceTests.cs
with_an_order_from_a_new_customer
    it should check their credit from the credit service
    it should give no discount
with valid credit check
    it should decrement inventory
    it should ship the goods
with a customer in texas or california
    it should add appropriate sales tax
with an order from a gold customer
    it should NOT check credit
    it should get expedited shipping added for free
Our tests are now really good documentation for our system. Each "with_an..." is a test fixture, and the items below it are tests. Within those, you setup the context (the state of the world as the class name describes) and then the test does the simple assert that verifies what the method name says it does.
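In MSTest terms (the framework used in the question), one such context fixture might look like the following rough sketch; OrderFulfillmentService, FakeCreditService, Order and Customer are hypothetical:
[TestClass]
public class with_an_order_from_a_new_customer
{
    private OrderFulfillmentService service;
    private FakeCreditService creditService;
    private Order order;

    [TestInitialize]
    public void establish_context()
    {
        // the setup method really does set up the context described by the class name
        creditService = new FakeCreditService();
        service = new OrderFulfillmentService(creditService);
        order = new Order { Customer = new Customer { IsNew = true } };
        service.Fulfill(order);
    }

    [TestMethod]
    public void it_should_check_their_credit_from_the_credit_service()
    {
        Assert.IsTrue(creditService.CreditWasChecked);
    }

    [TestMethod]
    public void it_should_give_no_discount()
    {
        Assert.AreEqual(0m, order.Discount);
    }
}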
The second approach is much more readable, and much easier to visually trace.
However, the first approach means less repetition.
What I've found is that I tend to use the SetUp to create objects (especially for things with a number of dependencies), and then set the values used in the test itself. From experience, this provides about the right amount of code-reuse versus readability/traceability.
From talking with Kent Beck about the design of jUnit I know that Test Classes were a way to share setup between Tests, so using the common initialization was the intent. However, along with that, that means splitting tests that require different setup into separate test classes that have revealing names.
Personally, I use Setup and Teardown methods for two distinct reasons, although I assume that others will have different reasons.
1) Use Setup and Teardown methods when there is common initialization logic that is used by all tests and a single instance of the object(s) created in the Setup is designed to be reused.
2) Use Setup and Teardown methods when creating and destroying the object(s) involved takes enough time to slow down the unit testing process when repeated in each TestMethod.
To give you an idea of how often I run across these scenarios: in a project I am working on now, only two of my test classes (out of about eighty) have an explicit need for Setup and Teardown methods, and both times it was to satisfy my second reason, due to the 10-second max I have enabled for each test execution.
I also prefer the readability of having the object(s) created and destroyed within the TestMethod, although it is not a breaking or selling point for me.
The approach I take is somewhere in the middle - I use SetUp and TearDown to create a test "sandbox" directory (and delete it when done), as well as to initialize some test member variables with default values that will be used to test the classes. I then set up some "helper methods" - one is generally called InstantiateClass() - which I call with the default parameters (if any), which I can override as necessary in each explicit test.
[Test]
public void TestSomething()
{
_myVar = "value";
InstantiateClass();
var result = RunTheClass();
Assert.IsTrue(result, "explanation of the expected outcome");   // placeholder assertion
}
In practice, I find setup methods make it hard to reason about a failing test: you have to scroll to somewhere near the top of the file (which can be very large) to figure out which collaborator has broken (not easy with mocking), and there is no clickable reference to navigate in your IDE. In short, you lose spatial locality.
Static helper methods reveal the collaborators more explicitly, and you avoid fields which unnecessarily widen the scope of variables.
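As a small sketch of that style, using the SomeStuff class from the question (the helper name is arbitrary):
[TestClass]
public class SomeStuffTestWithHelper
{
    [TestMethod]
    public void TestGetValue()
    {
        // the input is visible right at the call site; no hidden fixture state
        SomeStuff someStuff = CreateSomeStuff("abc");
        Assert.AreEqual("abc", someStuff.Value);
    }

    // a static helper keeps construction in one place without widening variable scope
    private static SomeStuff CreateSomeStuff(string value)
    {
        return new SomeStuff(value);
    }
}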