I have a security rule that a newly registered user has full permissions over their own user entity. I'm using Rhino.Security and the code works fine, but I want to create a unit test to make sure the appropriate call is made to set up the permission. Here is a simplified version of the code:
public User Register(UserRegisterTask userRegistrationTask) {
    User user = User.Create(userRegistrationTask);
    this.userRepository.Save(user);

    // Give this user permission to do operations on itself
    this.permissionsBuilderService.Allow("Domain/User")
        .For(user)
        .On(user)
        .DefaultLevel()
        .Save();

    return user;
}
I've mocked the userRepository and the permissionsBuilderService, but the fluent interface of the permissionsBuilderService returns a different object from each method call in the chain (i.e. .Allow(...).For(...).On(...) etc.), and I can't find a way to mock each of the objects in the chain.
Is there a way to test that the permissionsBuilderService's Allow method is being called while ignoring the rest of the chain?
Thanks
Dan
I also ran into this and ended up wrapping the Rhino Security functionality in a service layer for two reasons:
1. It was making unit testing a real PITA. After spending a couple of hours hitting my head against a brick wall, this approach allowed me to mock this layer far more easily.
2. I started to feel that Rhino Security was becoming very tightly coupled to my controllers (my application uses MVC). Wrapping the calls in another layer gave me looser coupling to a specific security implementation and will let me easily swap it out for another, if I so choose, in the future.
Obviously, this is only one approach. But it made my life much easier...
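To make the payoff concrete, here is a rough sketch of what such a wrapper might look like (IPermissionService and AllowOperationOn are made-up names; the inner chain is the same Rhino.Security code from the question):

// Hypothetical wrapper interface that hides the fluent chain
public interface IPermissionService
{
    void AllowOperationOn(string operation, User performer, User entity);
}

// Thin implementation delegating to Rhino.Security
public class RhinoSecurityPermissionService : IPermissionService
{
    private readonly IPermissionsBuilderService permissionsBuilderService;

    public RhinoSecurityPermissionService(IPermissionsBuilderService permissionsBuilderService)
    {
        this.permissionsBuilderService = permissionsBuilderService;
    }

    public void AllowOperationOn(string operation, User performer, User entity)
    {
        this.permissionsBuilderService.Allow(operation)
            .For(performer)
            .On(entity)
            .DefaultLevel()
            .Save();
    }
}

Register then depends on IPermissionService, and the unit test only has to verify a single AllowOperationOn call instead of stubbing every object in the fluent chain.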
Let's say that we have a class Controller that depends on a class Service, and the Service class depends on a class Repository. Only Repository communicates with an external system (say, a DB), and I know it should be mocked when unit tests are executed.
My question: for unit tests, should I mock the Service class when the Controller class is tested, even though the Service class doesn't depend on any external systems directly? And why?
It depends on the kind of test you are writing: integration test or unit test.
I assume you want to write a unit test in this case. The purpose of a unit test is to test only the business logic of your class, so every other dependency should be mocked.
In this case you will mock the Service class. Doing so also allows you to prepare for testing certain scenarios based on the input you are passing to a certain method of Service. Imagine your Service has a method Person findPerson(Long personID). When testing your Controller you are not interested in doing everything that's necessary for Service to actually return the right output. For one test scenario of your Controller you just want it to return a Person, whereas for a different test scenario you don't want it to return anything. Mocking makes this very easy.
Also note that if you mock your Service you don't have to mock Repository since your Service is already a mock.
TLDR; When writing a unit test for a certain class, just mock every other dependency to be able to manipulate the output of method invocations made to these dependencies.
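To make that concrete, here is a minimal sketch in C# with Moq (Person, IPersonService and PersonController are hypothetical equivalents of the classes discussed above):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public class Person { public long Id { get; set; } }

public interface IPersonService
{
    Person FindPerson(long personId);
}

public class PersonController
{
    private readonly IPersonService service;
    public PersonController(IPersonService service) { this.service = service; }
    public Person Details(long id) { return service.FindPerson(id); }
}

[TestClass]
public class PersonControllerTests
{
    [TestMethod]
    public void Details_ReturnsPerson_WhenServiceFindsOne()
    {
        // The service is mocked, so no Repository (or database) is involved
        var service = new Mock<IPersonService>();
        service.Setup(s => s.FindPerson(42L)).Returns(new Person { Id = 42 });

        var controller = new PersonController(service.Object);

        Assert.IsNotNull(controller.Details(42L));
    }

    [TestMethod]
    public void Details_ReturnsNothing_WhenServiceFindsNothing()
    {
        // Same controller, different scripted service behaviour
        var service = new Mock<IPersonService>();
        service.Setup(s => s.FindPerson(42L)).Returns((Person)null);

        var controller = new PersonController(service.Object);

        Assert.IsNull(controller.Details(42L));
    }
}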
Yes, mock Services when testing controllers. Unit tests help to identify the location of a regression, so a test for Service code should only fail if the service code has changed, not if the controller code has changed. That way, when the service test fails, you know for sure that the root cause lies in a change to Service.
Also, usually it is much easier to mock the service than to mock all repositories invoked by the service just to test the controller. So it makes your tests easier to maintain.
But in general, you may keep certain util classes unmocked, as you lose more than you gain by mocking those. Also see:
https://softwareengineering.stackexchange.com/questions/148049/how-to-deal-with-static-utility-classes-when-designing-for-testability
As with all engineering questions, TDD is no different: the answer is always "it depends". There are always trade-offs.
In the case of TDD, you develop the test through behavioral expectations first. In my experience, a behavioral expectation is a unit.
An example: say you want to get all users whose last names start with 'A' and who are active in the system. So you would write a test for a controller action that gets those users: public ActionResult GetAllActiveUsersThatStartWithA().
In the end, I might have something like this:
public ActionResult GetAllActiveUsersThatStartWithA()
{
    var users = _repository.GetAllUsers();
    var activeUsersThatStartWithA = users.Where(u => u.IsActive && u.Name.StartsWith("A"));
    return View(activeUsersThatStartWithA);
}
This to me is a unit. I can now refactor (change my implementation without changing behavior) by adding a service class with the method below:
public IEnumerable<User> GetActiveUsersThatStartWithLetter(char startsWith)
{
    var users = _repository.GetAllUsers();
    return users.Where(u => u.IsActive && u.Name.StartsWith(startsWith.ToString()));
}
And my new implementation of the controller becomes
public ActionResult GetAllActiveUsersThatStartWithA()
{
    return View(_service.GetActiveUsersThatStartWithLetter('A'));
}
This is obviously a very contrived example, but it gives an idea of my point. The main benefit of doing it this way is that my tests aren't tied to any implementation details except the repository. Whereas, if I had mocked out the service in my tests, I would now be tied to that implementation. If for whatever reason that service layer is removed, all my tests break. I find it more likely that the service layer is more volatile to change than the repository layer.
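For illustration, a test along those lines might look like this (a minimal sketch using MSTest and Moq; IUserRepository and User are stand-ins for the code above, and UsersController is assumed to host the action shown earlier):

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public class User { public string Name { get; set; } public bool IsActive { get; set; } }

public interface IUserRepository { IEnumerable<User> GetAllUsers(); }

[TestClass]
public class UsersControllerTests
{
    [TestMethod]
    public void GetAllActiveUsersThatStartWithA_FiltersInactiveUsersAndOtherLetters()
    {
        // Only the repository is mocked; whether the controller filters
        // inline or delegates to a service class is invisible to the test,
        // so the refactoring above doesn't break it.
        var repository = new Mock<IUserRepository>();
        repository.Setup(r => r.GetAllUsers()).Returns(new[]
        {
            new User { Name = "Alice", IsActive = true },
            new User { Name = "Adam",  IsActive = false }, // inactive
            new User { Name = "Bob",   IsActive = true }   // wrong letter
        });

        var controller = new UsersController(repository.Object);

        var result = (ViewResult)controller.GetAllActiveUsersThatStartWithA();
        var model = ((IEnumerable<User>)result.ViewData.Model).ToList();

        Assert.AreEqual(1, model.Count);
        Assert.AreEqual("Alice", model[0].Name);
    }
}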
The other thing to think about is that if I do mock out the service in my controller tests, I could run into a scenario where all my tests pass, but the only way I find out the system is broken is through an integration test (meaning a test where out-of-process or cross-assembly components interact with each other), or through production issues.
If, for instance, I change the implementation of the service class to the following:
public IEnumerable<User> GetActiveUsersThatStartWithLetter(char startsWith)
{
    throw new Exception();
}
Again, this is a very contrived example, but the point is still relevant. I would not catch this with my controller tests, hence it looks like the system is behaving properly with my passing "unit tests", but in reality the system is not working at all.
The downside of my approach is that the tests could become very cumbersome to set up. So the trade-off is balancing test complexity against abstraction/mockable implementations.
The key thing to keep in mind is that TDD gives the benefit of catching regressions, but its main benefit is to help design a system. In other words, don't let the design dictate the tests you write. Let the tests dictate the functionality of the system first, then worry about the design through refactoring.
I'm working on a fresh Grails project and recently noticed the default convention in the Spring Security Core generated User class now auto-encodes the password via a beforeInsert/Update event. That's a nice, clean, DRY way of doing the encode, and also makes it impossible to forget to do so.
However, now when trying to write up some unit tests which make use of said User class, I find I either have to mock out the springSecurityService (due to the encode), or, preferably (and more cleanly), just override the beforeInsert/Update closure with one that does nothing. Typically in Groovy one can override a method using the ExpandoMetaClass, à la...
User.metaClass.beforeInsert = { /* do nothing */ }
...but I'm finding that the original beforeInsert continues to be called upon creating and saving a new User. This in turn causes my unit tests to blow up. It's trivial for me to just work around this and mock out the service, but the above should work. Am I missing something? Is there something different with GORM's event closures that I'm not picking up on?
In order to improve performance, Grails invokes events directly via reflection with cached method handles, not through Groovy's meta layer. The reason is that if you are saving hundreds of domain instances, it can seriously hurt performance if Grails has to go through Groovy's meta layer for each event.
There are ways around this, such as defining your own User class that disables the event based on a system/environment property that your test sets, etc., but there is currently no way to override the methods via metaprogramming.
The beforeInsert closure is not just a method like toString() or save(); it is also a pre-defined event supported by GORM. Overriding the method will not prevent GORM from firing the PreInsert event, which triggers the original behavior.
If necessary, you can move the code in beforeInsert into a private method and then override that private method.
I have a service reference to a .NET 2.0 web service. I have a reference to this service in my repository and I want to move to Ninject. I've been using DI for some time now, but haven't tried it with a web service like this.
So, in my code, the repository constructor creates two objects: the client proxy for the service, and an AuthHeader object that is the first parameter of every method in the proxy.
The AuthHeader is where I'm having friction. Because the concrete type is required as the first parameter on every call in the proxy, I believe I need to take a dependency on AuthHeader in my repository. Is this true?
I extracted an interface for AuthHeader from my reference.cs. I wanted to move to the following for my repository constructor:
[Inject]
public PackageRepository(IWebService service, IAuthHeader authHeader)
{
    _service = service;
    _authHeader = authHeader;
}
...but then I can't make calls to my service proxy like
_service.MakeSomeCall(_authHeader, "some value")
...because MakeSomeCall is expecting an AuthHeader, not an IAuthHeader.
Am I square-pegging a round hole here? Is this just an area where there isn't a natural fit (because of web service "awesomeness")? Am I missing an approach?
It's difficult to understand exactly what the question is here, but some general advice might be relevant to this situation:
Dependency injection does not mean that everything has to be an interface. I'm not sure why you would try to extract an interface from a web service proxy generated from WSDL; the types in the WSDL are contracts which you must follow. This is especially silly if the IAuthHeader doesn't have any behaviour (it doesn't seem to) and you'll never have alternate implementations.
The reason why this looks all wrong is because it is wrong; this web service is poorly designed. Information that's common to all messages (like an authentication token) should never go in the body, where it translates to a method parameter; instead it should go in the message header, where the ironically-named AuthHeader clearly isn't. Headers can be intercepted by the proxy and inspected prior to executing any operation, either on the client or service side. In WCF that's part of the behavior (generally ClientCredentials for authentication) and in legacy WSE it's done as an extension. Although it's theoretically possible to do this with information in the message body, it's far more difficult to pull off reliably.
In any event, what's really important here isn't so much what your repository depends on but where that dependency comes from. If your AuthHeader is injected by the kernel as a dependency then you're still getting all the benefits of DI - in particular the ability to have this all registered in one place or substitute a different implementation (i.e. a derived class).
So design issues aside, I don't think you have a real problem in your DI implementation. If the class needs to take an AuthHeader then inject an AuthHeader. Don't worry about the exact syntax and type, as long as it takes that dependency as a constructor argument or property.
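For example, a minimal Ninject registration along those lines might look like this (a sketch; RepositoryModule and WebServiceClientProxy are made up, and the Username/Password properties stand in for whatever the generated AuthHeader actually exposes):

using Ninject;
using Ninject.Modules;

public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        // Bind the concrete AuthHeader so credentials are configured
        // in exactly one place and injected wherever they're needed.
        Bind<AuthHeader>().ToMethod(ctx => new AuthHeader
        {
            Username = "svc-user",  // illustrative values only
            Password = "secret"
        });

        Bind<IWebService>().To<WebServiceClientProxy>();  // hypothetical proxy class
        Bind<PackageRepository>().ToSelf();
    }
}

The repository constructor then takes the concrete AuthHeader instead of IAuthHeader; the dependency is still injected and substitutable (e.g. with a derived class), just not hidden behind an interface.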
BRAND NEW to unit testing, I mean really new. I've read quite a bit and am moving slowly, trying to follow best practices as I go. I'm using MS-Test in Visual Studio 2010.
I have come up against a requirement that I'm not quite sure how to proceed on. I'm working on a component that's responsible for interacting with external hardware. There are a few more developers on this project and they don't have access to the hardware so I've implemented a "dummy" or simulated implementation of the component and moved as much shared logic up into a base class as possible.
Now this works fine as far as allowing them to compile and run the code, but it's not terribly useful for simulating the events and internal state changes needed for my unit tests (don't forget, I'm new to testing).
For example, there are a couple events on the component that I want to test, however I need them to be invoked in order to test them. Normally to raise the event I would push a button on the hardware or shunt two terminals, but in the simulated object (obviously) I can't do that.
There are two concerns/requirements that I have:
1. I need to provide state changes and raise events for my unit tests.
2. I need to provide state changes and raise events for my team to test dependencies on the component (e.g. a button on a WPF view becomes enabled when a certain hardware event occurs).
For the latter I thought about some complicated control panel dialog that would let me trigger events and generally simulate hardware operation and user interaction. This is complicated as it requires a component with no message pump to provide a window with controls. Stinky. Or another approach could be to implement the simulated component to take a "StateInfo" object that I could use to change the internals of the object.
This can't be a new problem; I'm sure many of you have had to do something similar and I'm just wondering what patterns or strategies you've used to accomplish this. I know I can access private fields with an accessor, but that doesn't really allow for interactive changes (in the case of runtime simulation).
If there is an interface on the library you use to interact with the external hardware you can just create a mock object for it and raise events from that in your unit tests.
If there isn't, then you'll need to wrap the hardware calls in a wrapper class so you can mock it and provide the behaviours you want in your tests.
For examples of how to raise events from mock objects have a look at Mocking Comparison - Raising Events
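As a quick illustration with Moq (IHardwareDevice and ButtonPressed are hypothetical names for your wrapper interface):

using System;
using Moq;

// Hypothetical interface wrapping the hardware API
public interface IHardwareDevice
{
    event EventHandler ButtonPressed;
    bool IsConnected { get; }
}

public static class Example
{
    public static void Main()
    {
        var device = new Mock<IHardwareDevice>();
        device.SetupGet(d => d.IsConnected).Returns(true);

        device.Object.ButtonPressed += (s, e) => Console.WriteLine("Button handled!");

        // Simulate pressing the physical button, no hardware required
        device.Raise(d => d.ButtonPressed += null, EventArgs.Empty);
    }
}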
I hope that helps!
I have written a number of unit tests that test a wrapper around an FTP server API.
Both the unit tests and the FTP server are on the same machine.
The wrapper API gets deployed to our platform and is used in both remoting and web service scenarios. The wrapper API essentially takes XML messages to perform tasks such as adding/deleting/updating users, changing passwords, modifying permissions... that kinda thing.
In a unit test, say to add a user to a virtual domain, I create the XML message to send to the API. The API does its work and returns a response with status information about whether the operation was successful or failed (error codes, validation failures, etc.).
To verify whether the API wrapper code really did do the right thing (if the response indicated success), I invoke the FTP server's COM API and query its store directly to see if, for example when creating a user account, the user account really did get created.
Does this smell bad?
Update 1: @Jeremy/Nick: The wrapper is the focus of the testing; the FTP server and its COM API are 3rd-party products, presumably well tested and stable. The wrapper API has to parse the XML message and then invoke the FTP server's API. How would I verify (and this may be a silly case) that a particular property of the user account is set correctly by the wrapper? For example, the wrapper might set the wrong property or attribute of an FTP account due to a typo. A good example is setting the upload and download speed limits; these may get transposed in the wrapper code.
Update 2: thanks all for the answers. To the folks who suggested using mocks, it had crossed my mind, but the light hasn't switched on there yet and I'm still struggling to get my head round how I would get my wrapper to work with a mock of the FTP server. Where would the mocks reside and do I pass an instance of said mocks to the wrapper API to use instead of calling the COM API? I'm aware of mocking but struggling to get my head round it, mostly because I find most of the examples and tutorials are so abstract and (I'm ashamed to say) verging on the incomprehensible.
You seem to be mixing unit & component testing concerns.
If you're unit-testing your wrapper, you should use a mock FTP server and not involve the actual server. The plus side is that you can usually achieve 100% automation this way.
If you're component-testing the whole thing (the wrapper + FTP server working together), try to verify your results at the same level as your tests, i.e. by means of your wrapper API. For example, if you issue a command to upload a file, next issue a command to download/delete that file to make sure the file was uploaded correctly. For more complex operations where it's not trivial to test the outcome, consider resorting to the COM API "backdoor" you mentioned, or perhaps involve some manual verification (do all of your tests need to be automated?).
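As a rough sketch of that round-trip idea (everything here, from FtpApiWrapper to the XML commands, is a hypothetical stand-in for the real wrapper API):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FtpWrapperComponentTests
{
    [TestMethod]
    public void UploadedFileCanBeDownloadedAgain()
    {
        var wrapper = new FtpApiWrapper();  // hypothetical wrapper entry point

        var upload = wrapper.Execute("<upload name='test.txt'>hello</upload>");
        Assert.IsTrue(upload.Success);

        // Verify through the same API rather than the COM backdoor
        var download = wrapper.Execute("<download name='test.txt'/>");
        Assert.IsTrue(download.Success);
        Assert.AreEqual("hello", download.Content);
    }
}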
To verify whether the API wrapper code really did do the right thing (if the response indicated success), I invoke the FTP server's COM API
Stop right there. You should be mocking the FTP server and the wrapper should operate against the mock.
If your test runs both the wrapper and the FTP server, you are not Unit Testing.
To test your wrapper with a mock object, you can do the following:
Write a COM object that has the same interface as the FTP server's COM API. This will be your mock object. You should be able to interchange the real FTP server and your mock object by passing the interface pointer of either to your wrapper by means of dependency injection.
Your mock object should implement hard-coded behaviour based on the methods called on its interface (which mimics the FTP server API) and also based on the argument values used:
For example, if you have an UploadFile method you can blindly return a success result and perhaps store the file name that was passed in in an array of strings.
You could simulate an upload error when you encounter a file name with "error" in it.
You could simulate latency/timeout when you encounter a file name with "slow" in it.
Later on, the DownloadFile method could check the internal string array to see if a file with that name was already "uploaded".
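A hand-rolled mock along those lines might look like this in C# (IFtpServerApi and FtpResult are illustrative; the real thing would mirror the server's COM interface):

using System.Collections.Generic;

// Illustrative stand-in for the FTP server's COM interface
public interface IFtpServerApi
{
    FtpResult UploadFile(string fileName);
    FtpResult DownloadFile(string fileName);
}

public enum FtpResult { OK, NotFound, Error }

public class MockFtpServer : IFtpServerApi
{
    // Remembers which files were "uploaded" during the test
    private readonly List<string> uploadedFiles = new List<string>();

    public FtpResult UploadFile(string fileName)
    {
        // Simulate an upload error for specially named files
        if (fileName.Contains("error"))
            return FtpResult.Error;

        uploadedFiles.Add(fileName);
        return FtpResult.OK;
    }

    public FtpResult DownloadFile(string fileName)
    {
        if (fileName.Contains("SimulateError"))
            return FtpResult.Error;

        // Only previously "uploaded" files can be downloaded
        return uploadedFiles.Contains(fileName) ? FtpResult.OK : FtpResult.NotFound;
    }
}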
The pseudo-code for some test cases would be:
//RealServer theRealServer;
//FtpServerIntf ftpServerIntf = theRealServer.getInterface();
// Let's test with our mock instead
MockServer myMockServer;
FtpServerIntf ftpServerIntf = myMockServer.getInterface();
FtpWrapper myWrapper(ftpServerIntf);
FtpResponse resp = myWrapper.uploadFile("Testing123");
assertEquals(FtpResponse::OK, resp);
resp = myWrapper.downloadFile("Testing123");
assertEquals(FtpResponse::OK, resp);
resp = myWrapper.downloadFile("Testing456");
assertEquals(FtpResponse::NOT_FOUND, resp);
resp = myWrapper.downloadFile("SimulateError");
assertEquals(FtpResponse::ERROR, resp);
I hope this helps...
I agree with Nick and Jeremy about not touching the API. I would look at mocking the API.
http://en.wikipedia.org/wiki/Mock_object
If it's .NET you can use:
Moq: http://code.google.com/p/moq/
And a bunch of other mocking libraries.
What are you testing, the wrapper or the API? The API should work as is, so you don't need to test it, I would think. Focus your testing efforts on the wrapper and pretend the API doesn't exist; when I write a class that does file access, I don't unit test the built-in StreamReader... I focus on my code.
I would say your API should be treated just like a database or a network connection when testing. Don't test it, it isn't under your control.
It doesn't sound like you're asking "Should I test the API?" — you're asking "Should I use the API to verify whether my wrapper is doing the right thing?"
I say yes. Your unit tests should assert that your wrapper passes along the information reported by the API. In the example you give, for instance, I don't know how you would avoid touching the API. So I don't think it smells bad.
The only time I can think of when it might make sense to dip into the lower-level API to verify results is if the higher-level API is write-only. For example, if you can create a user using the high-level API, then there should be a high-level API to get the user accounts, too. Use that.
Other folks have suggested mocking the lower-level API. That's good, if you can do it. If the lower-level component is mocked, checking the mocks to make sure the right state is set should be okay.
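For instance, with the lower-level API mocked, a check like this could catch the transposed speed limits from Update 1 (a sketch using Moq; IFtpUserApi, SetSpeedLimits and the trivial wrapper are all made up for illustration):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical slice of the COM interface the wrapper drives
public interface IFtpUserApi
{
    void SetSpeedLimits(string userName, int uploadKbps, int downloadKbps);
}

// Trivial illustrative wrapper that forwards to the injected API
public class FtpUserWrapper
{
    private readonly IFtpUserApi api;
    public FtpUserWrapper(IFtpUserApi api) { this.api = api; }

    public void SetSpeedLimits(string userName, int upload, int download)
    {
        // A typo here could transpose upload and download
        api.SetSpeedLimits(userName, upload, download);
    }
}

[TestClass]
public class FtpUserWrapperTests
{
    [TestMethod]
    public void SetSpeedLimits_DoesNotTransposeUploadAndDownload()
    {
        var api = new Mock<IFtpUserApi>();
        var wrapper = new FtpUserWrapper(api.Object);

        wrapper.SetSpeedLimits("dan", 10, 20);

        // Fails if the wrapper swapped the two values
        api.Verify(a => a.SetSpeedLimits("dan", 10, 20), Times.Once());
    }
}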