How to use Moq to simulate exceptions thrown by the StackExchange.Redis library? - unit-testing

I'm working on a project where I would like to use the StackExchange.Redis library to connect to a Redis database. In addition, I'd like to use Moq as the mocking library in our unit tests, but I'm running into a roadblock that I need some help to resolve.
My specific issue is that I have a command/response architecture, and I'm trying to test a cache "wrapper" that checks whether a command can be satisfied by the cache before doing the actual work to produce the response. An important consideration is that cache failures should never stop a request from being handled, so I want to mock out certain failures from the StackExchange.Redis IDatabase interface. However, I can't figure out how to throw a RedisException from my mocked object. For example, I'd like to just use the following code, but it doesn't compile because the exceptions thrown by this library have no public constructors.
Mock<IDatabase> _mockCache = new Mock<IDatabase>();
// Simulate a connection exception to validate how the cache layer handles it
// THIS DOESN'T COMPILE THOUGH
//
_mockCache.Setup(cache => cache.StringGetAsync("a", CommandFlags.None)).Throws<RedisException>();
Is there a better way to mock up failures within the StackExchange.Redis library? I feel like I'm missing something obvious either with Moq or with how to test against this library.

You can always use FormatterServices.GetUninitializedObject (from System.Runtime.Serialization) to create an instance of a class without running any constructor. The documentation says:
Creates a new instance of the specified object type.
Because the new instance of the object is initialized to zero and no constructors are run, the object might not represent a state that is regarded as valid by that object. The current method should only be used for deserialization when the user intends to immediately populate all fields. It does not create an uninitialized string, since creating an empty instance of an immutable type serves no purpose.
So your setup could look like this:
var e = (RedisException)FormatterServices.GetUninitializedObject(typeof(RedisException));
Mock<IDatabase> _mockCache = new Mock<IDatabase>();
_mockCache.Setup(cache => cache.StringGetAsync("a", CommandFlags.None)).Throws(e);
It's a little hackish; since no constructors are run, you should not rely on the RedisException instance being in a valid state.
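To show how this plays out in a full test, here's a hedged sketch (NUnit-style; CacheWrapper and HandleAsync are hypothetical stand-ins for the cache layer under test):

using System.Runtime.Serialization;
using System.Threading.Tasks;
using Moq;
using NUnit.Framework;
using StackExchange.Redis;

[TestFixture]
public class CacheWrapperTests
{
    [Test]
    public async Task HandleAsync_WhenRedisThrows_StillServesTheRequest()
    {
        // Create the exception without running any constructor.
        var redisException = (RedisException)FormatterServices
            .GetUninitializedObject(typeof(RedisException));

        var mockCache = new Mock<IDatabase>();
        mockCache
            .Setup(cache => cache.StringGetAsync("a", CommandFlags.None))
            .Throws(redisException);

        // CacheWrapper is the hypothetical class under test: a cache
        // failure should fall through to the real work, not bubble up.
        var wrapper = new CacheWrapper(mockCache.Object);
        var response = await wrapper.HandleAsync("a");

        Assert.IsNotNull(response); // the request was still handled
    }
}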

Related

Is there a standard for whether a class under test should be constructed in a fixture or in a test?

I'm just curious, are there any standard guidelines that state whether an instance of a class under test should be constructed in a fixture or in the actual test case?
Thanks!
I'm not aware of a standard reference on that topic. Here's what I'd do:
If I had only one test to write, or if I needed an instance of the class under test that was constructed differently than any other instance of that class in my test suite, I'd just instantiate it in the test. Why make it any more complicated than you have to? If I needed to use the same instance over and over again, I'd put it in a fixture.
I do think it's important to construct only the fixtures you need for a given test case, so that there's nothing to mislead the reader. That means either using whatever scoping mechanism your test framework provides (e.g. an rspec context block or a whole new xUnit TestCase) to construct a given fixture only before the tests that need it, or moving instance construction from fixtures into the tests themselves. To avoid duplication, you can always write a method to construct an instance and call it from as many tests as you want.
I tend to avoid putting anything inside a fixture.
After a while, the state of the class under test (CUT) tends to get out of hand as the number of tests in the fixture increases. Each test requires similar but slightly different behavior, which may or may not belong in some shared initialization/setup method.
Holding the CUT at the fixture level creates shared state between the tests, which can cause test failures that depend on run order - a pain to find and fix.
Another readability issue arises when a test fails - people tend to forget about initialization that happened in another method.
There are better ways to avoid code duplication - using an auto-mocking container to create objects with fake parameters, or factory methods that allow different initialization for each test (if required) and make for more readable and maintainable tests, as sketched below.
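As a hedged illustration of the factory-method idea (all type names here are hypothetical), each test builds exactly the object it needs and overrides only the collaborators it cares about:

using Moq;
using NUnit.Framework;

// Minimal hypothetical types, just to make the sketch self-contained.
public class Order { }
public interface IPaymentGateway { void Charge(Order order); }
public interface IOrderRepository { void Save(Order order); }

public class OrderProcessor
{
    private readonly IPaymentGateway _gateway;
    private readonly IOrderRepository _repository;

    public OrderProcessor(IPaymentGateway gateway, IOrderRepository repository)
    {
        _gateway = gateway;
        _repository = repository;
    }

    public void Process(Order order)
    {
        _gateway.Charge(order);
        _repository.Save(order);
    }
}

[TestFixture]
public class OrderProcessorTests
{
    // Factory method: no shared fixture state, yet no duplication either.
    private static OrderProcessor CreateProcessor(
        IPaymentGateway gateway = null,
        IOrderRepository repository = null)
    {
        return new OrderProcessor(
            gateway ?? Mock.Of<IPaymentGateway>(),
            repository ?? Mock.Of<IOrderRepository>());
    }

    [Test]
    public void Process_ChargesTheGateway()
    {
        var gateway = new Mock<IPaymentGateway>();
        var processor = CreateProcessor(gateway: gateway.Object);

        processor.Process(new Order());

        gateway.Verify(g => g.Charge(It.IsAny<Order>()), Times.Once);
    }
}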

Should dynamic dependencies of service objects be avoided?

This question is about testable software design based on mostly value objects and services.
Services that have static dependencies are straightforward to instantiate or configure when using a DI container. However, in some cases, services require dependencies that are known at runtime only.
Say, imagine a simple FileSystemDataStore with some CRUD methods in it for managing files in a directory. This service will need a directory name as one of its constructor parameters. That name could be known at runtime only and will have to be provided by its collaborators.
This seems to be somewhat of a problem, because you can't configure such a service in a DI container due to its dynamic nature. You'll probably have to use a factory to create such services. However, this results in a quirk in the unit tests of the service's clients: you have to mock the factory to return a mock of the service. This adds extra complexity to the unit tests, and mocks returning mocks is often considered a test smell.
What is your opinion about this problem? Is it even a problem in your experience? Should such services be instead refactored to be more "pure"?
As a general observation, when services depend on run-time values, an Abstract Factory is indeed the appropriate response.
However, as pointed out in the question, this does have an impact on the maintainability of the tests, so if you can redesign the API to avoid such situations, you should do that. It's not always possible, though.
You would like to inject the directory name, but it is not known during the construction phase. I see three options here.
1. Inject a Provider
Instead of saying "Here is the directory name you need", you are saying "Here is an object that can give you the directory name at run-time". The way to implement this is to declare a constructor argument Provider<String> directoryNameProvider. The constructor stores a reference to this provider in a member variable. When called upon to do some real work in the run phase, the class runs code like this wherever the directory name is needed:
directoryName = directoryNameProvider.get();
In Java, the interface you implement is javax.inject.Provider<T>. It has a single method, get(), which returns type T. Using the generic provider interface means you do not end up with a proliferation of interfaces.
When it comes to your unit test, you can inject an anonymous inner class that implements the single method of Provider<T> to return a constant value easily enough. Our code base has a SimpleProvider<T> class that wraps a given object in the Provider interface.
Pro: Allows you to construct the object in the main construction phase. Unit testing is pretty easy.
Con: Details about dependency creation leak into the class when they should be entirely the factory's concern. Too bad if the class is already written and accepts directoryName rather than directoryNameProvider.
Despite the cons, this is an option I use a lot. It is my opinion that there is a missing language construct here.
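For C# readers, a minimal analog of the same idea is to inject a Func<string> instead of a dedicated provider interface (a sketch only; the class shape is borrowed from the FileSystemDataStore example in the question):

using System;
using System.IO;

public class FileSystemDataStore
{
    // The directory name is not known at construction time, so we inject
    // a provider of the value rather than the value itself.
    private readonly Func<string> _directoryNameProvider;

    public FileSystemDataStore(Func<string> directoryNameProvider)
    {
        _directoryNameProvider = directoryNameProvider;
    }

    public void Save(string key, byte[] data)
    {
        // The provider is consulted at run time, when the value is needed.
        string directoryName = _directoryNameProvider();
        File.WriteAllBytes(Path.Combine(directoryName, key), data);
    }
}

// In a unit test, the provider is trivially replaced by a constant:
// var store = new FileSystemDataStore(() => "/tmp/test-dir");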
2. Construct the troublesome object later
You can enter an inner scope when you know more. Within a run-phase method, you can enter a new scope. This means that you go through a whole new mini-construction phase, and then a mini-run phase. This is similar to what happens in your application's main(), but at a smaller scale.
Pro: Class receiving the dependency remains pure.
Con: Entering and exiting too many scopes can make the application and object life-cycles difficult to understand.
3. Use a method argument
You can decide that directoryName is to be a method argument, passed to your class during the run phase rather than injected as a constructor argument. This is effectively deciding not to use dependency-injection style on this occasion.
Pro: Simplicity
Con: The class that passes directoryName as a method parameter becomes tightly coupled to the class that needs it, and it would be very difficult to swap in an alternate implementation that depends on, say, a database connection.
These are matters I have been considering a lot lately, so I'm interested in any comments or edits. Are there any other options?

What are strict and non-strict mocks?

I have started using Moq for mocking. Can someone explain to me the concept of strict and non-strict mocks? How can they be used in Moq?
edit:
In which scenario do we use which type of mock?
I'm not sure about Moq specifically, but here's how strict mocks work in Rhino Mocks. I declare that I expect a call to foo.Bar on my object foo:
foo.Expect(f => f.Bar()).Returns(5);
If the calling code does
foo.Bar();
then I'm fine because the expectations are exactly met.
However, if the calling code is:
foo.Quux(12);
foo.Bar();
then my expectation fails, because I did not explicitly expect a call to foo.Quux.
To summarize, a strict mock will fail immediately if anything differs from the expectations. A non-strict mock (or a stub), on the other hand, will gladly ignore the call to foo.Quux and will return default(T) for the return type T of foo.Quux.
The creator of Rhino recommends that you avoid strict mocks (and prefer stubs), because you generally don't want your test to fail when receiving an unexpected call as above. It makes refactoring your code much more difficult when you have to fix dozens of tests that relied on the exact original behavior.
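To tie this back to Moq: the same distinction is chosen via MockBehavior when the mock is created. A minimal sketch:

using Moq;

public interface IFoo
{
    int Bar();
    void Quux(int value);
}

public static class MockBehaviorDemo
{
    public static void Run()
    {
        // Loose (the default): unexpected calls are ignored and return
        // default values (0, null, etc.).
        var loose = new Mock<IFoo>(); // same as MockBehavior.Loose
        loose.Setup(f => f.Bar()).Returns(5);
        loose.Object.Quux(12); // fine, silently ignored
        loose.Object.Bar();    // returns 5

        // Strict: every invocation must have a matching Setup,
        // otherwise Moq throws a MockException immediately.
        var strict = new Mock<IFoo>(MockBehavior.Strict);
        strict.Setup(f => f.Bar()).Returns(5);
        strict.Object.Bar();       // fine
        // strict.Object.Quux(12); // would throw a MockException
    }
}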
Ever come across Given / When / Then?
Given a context
When I perform some events
Then an outcome should occur
This pattern appears in BDD's scenarios, and is also relevant for unit tests.
If you're setting up context, you're going to use the information which that context provides. For instance, if you're looking up something by Id, that's context. If it doesn't exist, the test won't run. In this case, you want to use a NiceMock or a Stub or whatever - Moq's default way of running.
If you want to verify an outcome, you can use Moq's verify. In this case, you want to record the relevant interactions. Fortunately, this is also Moq's default way of running. It won't complain if something happens that you weren't interested in for that test.
StrictMock is there for when you want no unexpected interactions to occur. It's how old-style mocking frameworks used to run. If you're doing BDD-style examples, you probably won't want this. It has a tendency to make tests a bit brittle and harder to read than if you separate the aspects of behaviour you're interested in. You have to set up expectations for both the context and the outcome, for all outcomes which will occur, regardless of whether they're of interest or not.
For instance, if you're testing a controller and mocking out both your validator and your repository, and you want to verify that you've saved your object, with a strict mock you also have to verify that you've validated the object first. I prefer to see those two aspects of behaviour in separate examples, because it makes it easier for me to understand the value and behaviour of the controller.
In the last four years I haven't found a single example which required the use of a strict mock - either it was an outcome I wanted to verify (even if I verify the number of times it's called) or a context for which I can tell if I respond correctly to the information provided. So in answer to your question:
non-strict mock: usually
strict mock: preferably never
NB: I am strongly biased towards BDD, so hard-core TDDers may disagree with me, and it will be right for the way that they are working.
Here's a good article.
I usually end up having something like this
public class TestThis {

    private final Collaborator1 collaborator1;
    private final Collaborator2 collaborator2;
    private final Collaborator3 collaborator3;

    TestThis(Collaborator1 collaborator1, Collaborator2 collaborator2, Collaborator3 collaborator3) {
        this.collaborator1 = collaborator1;
        this.collaborator2 = collaborator2;
        this.collaborator3 = collaborator3;
    }

    public Login login(String username) {
        User user = collaborator1.getUser(username);
        collaborator2.notify(user);
        return collaborator3.login(user);
    }
}
...and I use strict mocks for the three collaborators to test login(username); a sketch of such a test follows. I don't see why strict mocks should never be used.
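For what it's worth, here is how such a test might look with Moq strict mocks - a hedged C# rendering of the Java class above, with minimal stand-in types:

using Moq;
using NUnit.Framework;

public class User { }
public class Login { }
public interface ICollaborator1 { User GetUser(string username); }
public interface ICollaborator2 { void Notify(User user); }
public interface ICollaborator3 { Login Login(User user); }

public class TestThis
{
    private readonly ICollaborator1 _c1;
    private readonly ICollaborator2 _c2;
    private readonly ICollaborator3 _c3;

    public TestThis(ICollaborator1 c1, ICollaborator2 c2, ICollaborator3 c3)
    {
        _c1 = c1; _c2 = c2; _c3 = c3;
    }

    public Login Login(string username)
    {
        var user = _c1.GetUser(username);
        _c2.Notify(user);
        return _c3.Login(user);
    }
}

[TestFixture]
public class TestThisTests
{
    [Test]
    public void Login_DelegatesToAllThreeCollaborators()
    {
        // Strict mocks: any call not set up below fails the test immediately.
        var c1 = new Mock<ICollaborator1>(MockBehavior.Strict);
        var c2 = new Mock<ICollaborator2>(MockBehavior.Strict);
        var c3 = new Mock<ICollaborator3>(MockBehavior.Strict);

        var user = new User();
        var expected = new Login();
        c1.Setup(c => c.GetUser("alice")).Returns(user);
        c2.Setup(c => c.Notify(user));
        c3.Setup(c => c.Login(user)).Returns(expected);

        var sut = new TestThis(c1.Object, c2.Object, c3.Object);

        Assert.AreSame(expected, sut.Login("alice"));
    }
}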
I have a simple convention:
Use strict mocks when the system under test (SUT) is delegating the call to the underlying mocked layer without really modifying or applying any business logic to the arguments passed to itself.
Use loose mocks when the SUT applies business logic to the arguments passed to itself and passes on some derived/modified values to the mocked layer.
For example, let's say we have a database provider, StudentDAL, whose data access interface has two methods:
public Student GetStudentById(int id);
public IList<Student> GetStudents(int ageFilter, int classId);
The implementation which consumes this DAL looks like below:
public Student FindStudent(int id)
{
    // StudentDAL dependency injected.
    // Use a strict mock to test this: the call is delegated unchanged.
    return StudentDAL.GetStudentById(id);
}

public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
    // StudentDAL dependency injected.
    // The age filter is derived from the request before being passed to the underlying layer.
    int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;

    // Use a loose mock and Moq's Verify API to make sure the age filter is passed on correctly.
    return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
}
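A hedged sketch of the loose-mock half of that convention (NUnit; StudentService and its constructor wiring are hypothetical, while the DAL shape is carried over from above):

using System;
using System.Collections.Generic;
using Moq;
using NUnit.Framework;

// Minimal stand-ins for the types sketched above.
public class Student { }
public class StudentListRequest
{
    public DateTime DateOfBirthFilter { get; set; }
    public int ClassId { get; set; }
}
public interface IStudentDAL
{
    Student GetStudentById(int id);
    IList<Student> GetStudents(int ageFilter, int classId);
}

[TestFixture]
public class StudentServiceTests
{
    [Test]
    public void GetStudentsForClass_PassesDerivedAgeFilterToDal()
    {
        // Loose mock plus Verify: we assert on the derived argument,
        // not on every interaction.
        var dal = new Mock<IStudentDAL>();
        var service = new StudentService(dal.Object); // hypothetical SUT wiring

        var request = new StudentListRequest
        {
            DateOfBirthFilter = new DateTime(2010, 1, 1),
            ClassId = 7
        };
        int expectedAge = DateTime.Now.Year - 2010;

        service.GetStudentsForClass(request);

        dal.Verify(d => d.GetStudents(expectedAge, request.ClassId), Times.Once);
    }
}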

State/Interaction testing and confusion on mixing (or abusing) them

I think I understand the definition of state/interaction-based testing (read the Fowler thing, etc.). I found that I started state-based but have been doing more interaction-based, and I'm getting a bit confused about how to test certain things.
I have a controller in MVC and an action calls a service to deny a package:
public ActionResult Deny(int id)
{
    service.DenyPackage(id);
    return RedirectToAction("List");
}
This seems clear to me. Provide a mock service, verify it was called correctly, done.
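For illustration, that test might look something like this with Moq (a sketch; IPackageService and the controller wiring are hypothetical):

using System.Web.Mvc;
using Moq;
using NUnit.Framework;

[TestFixture]
public class PackageControllerDenyTests
{
    [Test]
    public void Deny_DelegatesToServiceAndRedirectsToList()
    {
        // Interaction-based: the observable outcome of Deny is the call
        // to the service plus the redirect.
        var service = new Mock<IPackageService>();
        var controller = new PackageController(service.Object);

        var result = (RedirectToRouteResult)controller.Deny(42);

        service.Verify(s => s.DenyPackage(42), Times.Once);
        Assert.AreEqual("List", result.RouteValues["action"]);
    }
}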
Now, I have an action for a view that lets the user associate a certificate with a package:
public ActionResult Upload(int id)
{
    var package = packageRepository.GetPackage(id);
    var certificates = certificateRepository.GetAllCertificates();
    var view = new PackageUploadViewModel(package, certificates);
    return View(view);
}
This one I'm a bit stumped on. I'm doing spec-style tests (possibly incorrectly), so to test this method I have a class and then two tests: verify the package repository was called, and verify the certificate repository was called. I actually want a third test to verify that the constructor was called, but have no idea how to do that! I get the impression this is completely wrong.
So for state-based testing I would pass in the id and then test the ActionResult's view. Okay, that makes sense. But wouldn't I have a test on the PackageUploadViewModel constructor? And if I have a test on the constructor, then part of me would just want to verify that I call the constructor and that the action returns what the constructor produced.
Now, another option I can think of is I have a PackageUploadViewModelBuilder (or something equally dumbly named) that has dependency on the two repositories and then I just pass the id to a CreateViewModel method or something. I could then mock this object, verify everything, and be happy. But ... well ... it seems extravagant. I'm making something simple ... not simple. Plus, controller.action(id) returning builder.create(id) seems like adding a layer for no reason (the controller is responsible for building view models.. right?)
I dunno... I'm thinking more state-based testing is necessary, but I'm afraid that if I start testing return values, then if Method A can get called in 8 different contexts, I'm going to have a test explosion with a lot of repetition. I had been using interaction-based testing to pass some of those contexts to Method B, so that all I have to do is verify Method A called Method B; Method B is tested, so Method A can just trust that those contexts are handled. So interaction-based testing builds up a hierarchy of tests, but state-based testing will flatten it out some.
I have no idea if that made any sense.
Wow, this is long ...
I think Roy Osherove recently tweeted that, as a rule of thumb, your tests should be 95 percent state-based and 5 percent interaction-based. I agree.
What matters most is that your API does what you want it to, and that is what you need to test. If you test the mechanics of how it achieves what it needs to do, you are very likely to end up with Overspecified Tests, which will bite you when it comes to maintainability.
In most cases, you can design your API so that state-based testing is the natural choice, because that is just so much easier.
To examine your Upload example: does it matter that GetPackage and GetAllCertificates were called? Is that really the expected outcome of the Upload method?
I would guess not. My guess is that the purpose of the Upload method - its very reason for existing - is to populate and serve the correct View.
So state-based testing would examine the returned ViewResult and its ViewModel and verify that it has all the correct values.
Sure, as the code stands right now, you will need to provide Test Doubles for packageRepository and certificateRepository, because otherwise exceptions will be thrown, but it doesn't look like it is important in itself that the repository methods are being called.
If you use Stubs instead of Mocks for your repositories, your tests are no longer tied to internal implementation details. If you later on decide to change the implementation of the Upload method to use cached instances of packages (or whatever), the Stub will not be called, but that's okay because it's not important anyway - what is important is that the returned View contains the expected data.
This is much preferable to having the test break even when all the returned data is as it should be.
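A hedged sketch of such a state-based test (classic ASP.NET MVC and NUnit assumed; the repository interfaces, controller wiring, and ViewModel properties are hypothetical):

using System.Collections.Generic;
using System.Web.Mvc;
using Moq;
using NUnit.Framework;

[TestFixture]
public class PackageControllerUploadTests
{
    [Test]
    public void Upload_ReturnsViewModelPopulatedFromRepositories()
    {
        // The repositories act as stubs: they only supply canned data.
        var package = new Package();
        var certificates = new List<Certificate>();

        var packageRepository = new Mock<IPackageRepository>();
        packageRepository.Setup(r => r.GetPackage(42)).Returns(package);
        var certificateRepository = new Mock<ICertificateRepository>();
        certificateRepository.Setup(r => r.GetAllCertificates()).Returns(certificates);

        var controller = new PackageController(
            packageRepository.Object, certificateRepository.Object);

        var result = (ViewResult)controller.Upload(42);
        var model = (PackageUploadViewModel)result.ViewData.Model;

        // State-based assertions on the ViewModel's contents.
        Assert.AreSame(package, model.Package);
        Assert.AreSame(certificates, model.Certificates);
    }
}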
Interestingly, your Deny example looks like a prime example where interaction-based testing is still warranted, because it is only by examining Indirect Outputs that you can verify that the method performed the correct action (the DenyPackage method returns void).
All this, and more, is explained very well in the excellent book xUnit Test Patterns.
The question to ask is "if this code worked, how could I tell?" That might mean testing some interactions or some state, it depends on what's important.
In your first test, the Deny changes the world outside the target class. It requires a collaboration from a service, so testing an interaction makes sense. In your second test, you're making queries on the neighbours (not changing anything outside the target class), so stubbing them makes more sense.
That's why we have a heuristic of "Stub Queries, Mock Actions" in http://www.mockobjects.com/book

unit testing data storage

Suppose I have an interface with the methods storeData(key, data) and getData(key). How should I test a concrete implementation? Should I check that the data was correctly written to the storage medium (e.g. an SQL database), or should I just check whether it gives the correct data back via getData?
If I look up the data in the database, it feels like I'm also testing the internals of the method; but only checking whether it gives the same data back feels incomplete.
You seem to be caught up in the hype of unit testing; what you will be doing is actually an integration test. Setting a value and getting it back for the same key is a unit test you'd do with a mock implementation of the storage engine. Actually testing the real storage - say, your database - as you should, is no longer a unit test, but it is a fundamental part of testing, and it sounds like integration testing to me. Don't use unit testing as your hammer; choose the right tools for the right job, and divide your testing into more layers.
What you want to do in a unit test is make sure that the method does the job it is supposed to do. If the method uses dependencies to accomplish its work, you would mock those dependencies out and make sure that your method calls the methods on the objects it depends on with the appropriate arguments. This way you test your code in isolation.
One of the benefits to this is that it will drive the design of your code in a better direction. In order to use mocking, for example, you naturally gravitate towards more decoupled code using dependency injection. This gives you the ability to easily substitute your mock objects for the actual objects that your class depends on. You also end up implementing interfaces, which are more naturally mocked. Both of these things are good design patterns and will improve your code.
In order to test your particular example, for instance, you might have your class depend on a factory to create connections to the database and a builder to construct parameterized SQL commands that are executed via the connection. You'd pass these mocked versions of these objects to your class and ensure that the correct methods to set up the connection and command, build the correct command, execute it, and tear down the connection were invoked. Or perhaps, you inject an already open connection and simply build the command and invoke it. The point is your class is built against an interface or set of interfaces and you use mocking to supply objects that implement those interfaces and can record invocations and supply correct return values to the methods that you expect to use from the interface(s).
In cases like this I will usually create SetUp and TearDown methods that fire before/after my unit tests. These methods will set up any test data I need in the db and delete any test data when I'm done. Pseudo code example:
Const KEY1 = "somekey"
Const VALUE1= "somevalue"
Const KEY2 = "somekey2"
Const VALUE2= "somevalue2"
Sub SetUpUnitTests()
{
Insert Into SQLTable(KEY1,VALUE1)
}
//this test is not dependent on the setData Method
Sub GetDataTest()
{
Assert.IsEqual(getData(KEY1),VALUE1)
}
//this test is not dependent on getData Method
Sub SetDataTest()
{
storeData(newKey,NewData)
Assert.IsNotNull(Direct Call to SQL [Select data from table where key=KEY2])
}
Sub TearDownUnitTests()
{
Delete From table Where key in (KEY1, KEY2)
}
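A hedged C# rendering of that pseudocode (NUnit and System.Data.SqlClient assumed; the table, column, and SqlDataStore names are hypothetical):

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class DataStoreIntegrationTests
{
    private const string ConnString = "...";   // test database connection string
    private const string Key1 = "somekey";
    private const string Value1 = "somevalue";
    private const string Key2 = "somekey2";
    private const string Value2 = "somevalue2";

    [SetUp]
    public void SetUp()
    {
        // Seed the row that the getData test reads, bypassing storeData
        // so the two tests stay independent of each other.
        Execute("INSERT INTO KeyValueTable (KeyCol, DataCol) VALUES (@p1, @p2)",
                Key1, Value1);
    }

    [Test]
    public void GetData_ReturnsSeededValue()
    {
        var store = new SqlDataStore(ConnString); // hypothetical implementation under test
        Assert.AreEqual(Value1, store.GetData(Key1));
    }

    [Test]
    public void StoreData_WritesRowToDatabase()
    {
        var store = new SqlDataStore(ConnString);
        store.StoreData(Key2, Value2);

        // Read back directly with SQL, bypassing getData.
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand(
            "SELECT DataCol FROM KeyValueTable WHERE KeyCol = @p1", conn))
        {
            cmd.Parameters.AddWithValue("@p1", Key2);
            conn.Open();
            Assert.AreEqual(Value2, (string)cmd.ExecuteScalar());
        }
    }

    [TearDown]
    public void TearDown()
    {
        Execute("DELETE FROM KeyValueTable WHERE KeyCol IN (@p1, @p2)", Key1, Key2);
    }

    private static void Execute(string sql, string p1, string p2)
    {
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@p1", p1);
            cmd.Parameters.AddWithValue("@p2", p2);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}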
Testing both in concert is a common technique (at least, in my experience), and I wouldn't shy away from it. I've used this same pattern for serializing/deserializing and parsing and printing.
If you don't want to hit the database, you could use a database mock. Some people have the same reservations as you about mocks - a mock is partly implementation-specific. As in all things, it's a trade-off: weigh the benefits of mocking (faster to run, not dependent on a database) against its downside (it won't detect actual database problems).
I think it depends on what happens to the data later - if you're only ever going to access the data using storeData and getData, why not test the methods in concert? I suppose there's a chance that a bug will arise and it'll be slightly harder to figure out whether it's in storeData or getData, but I'd consider that an acceptable risk if it
makes your test easier to implement, and
conceals the internals, as you say
If the data will be read from, or inserted into, the database using some other mechanism, then I'd check the database using SQL as you suggest.
#brendan makes a good point, though - whichever method you decide on, you'll be inserting data in the database. It's a good idea to clear out the data before and after the tests to ensure that you can achieve consistent results.
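And for completeness, the in-concert round-trip test discussed above can be as small as this (SqlDataStore is the same hypothetical implementation as in the earlier sketch):

using NUnit.Framework;

[TestFixture]
public class DataStoreRoundTripTests
{
    [Test]
    public void StoreData_ThenGetData_RoundTripsTheValue()
    {
        // Exercises storeData and getData in concert: a failure points at
        // the pair rather than either method - the trade-off discussed above.
        var store = new SqlDataStore("..."); // test database connection string
        store.StoreData("roundtrip-key", "roundtrip-value");

        Assert.AreEqual("roundtrip-value", store.GetData("roundtrip-key"));
    }
}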