How to set the ErrorCode of a ManagementException? - unit-testing

I want to handle a ManagementException exception for a specific ErrorCode only and am having trouble writing the unit test for it. Ordinarily, I would write the test so that it is something like the following:
Searcher search = MockRepository.GenerateMock<Searcher>();
// wrapper for ManagementObjectSearcher
...
search.Expect(s => s.Get()).Throw(new ManagementException());
...
However, this doesn't set the ErrorCode to the one that I want in particular, indeed ManagementException doesn't have a constructor which sets this value.
How can this be done?
(Note that I am using RhinoMocks as my mocking framework, but I am assuming that this is framework-independent; all I need to know here is how to create a ManagementException which has a specific ErrorCode value. I have also found some references online to a System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode) method, but it doesn't appear to be publicly accessible.)

The least effort to get over this hurdle would be a static helper / utility method that uses reflection to hack-slot in the required error code. Using the most excellent Reflector, I see there is a private "errorCode" field, which is only set via internal ctors defined in ManagementException. So :)
public static class EncapsulationBreaker
{
    public static ManagementException GetManagementExceptionWithSpecificErrorCode(ManagementStatus statusToBeStuffed)
    {
        var exception = new ManagementException();
        var fieldInfo = exception.GetType().GetField("errorCode",
            BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField | BindingFlags.DeclaredOnly);
        fieldInfo.SetValue(exception, statusToBeStuffed);
        return exception;
    }
}
Verified that it works:
[Test]
public void TestGetExceptionWithSpecifiedErrorCode()
{
    var e = EncapsulationBreaker.GetManagementExceptionWithSpecificErrorCode(ManagementStatus.BufferTooSmall);

    Assert.AreEqual(ManagementStatus.BufferTooSmall, e.ErrorCode);
}
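For example, wired into the expectation from the question (any ManagementStatus value works; AccessDenied is just an illustration):

search.Expect(s => s.Get())
      .Throw(EncapsulationBreaker.GetManagementExceptionWithSpecificErrorCode(ManagementStatus.AccessDenied));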
Although I generally frown upon reflection in tests, this is one of the rare cases where it is needed / useful.
HTH

Derive a class from ManagementException and hide the error code implementation with your own. Have your mock return this class.
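A rough sketch of that idea is below. Note that ErrorCode is not virtual, so the hiding property is only visible through a reference of the derived type; code that catches the base ManagementException will still read the original value, which is why the reflection or wrapping approaches elsewhere in this thread may serve you better.

public class FakeManagementException : ManagementException
{
    private readonly ManagementStatus _errorCode;

    public FakeManagementException(ManagementStatus errorCode)
    {
        _errorCode = errorCode;
    }

    // Hides (does not override) ManagementException.ErrorCode.
    public new ManagementStatus ErrorCode
    {
        get { return _errorCode; }
    }
}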

Have a very simple and small method or class which catches that exception, gets the error code out of it, and then passes it on to the real class that does the work. Under test, replace that code with something that passes the real class directly whatever it would receive when that error code occurs.
The most obvious way is to subclass the exception, but if that doesn't work, then code which catches it, and immediately throws your own exception that does allow you to expose that code would be another option.
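One possible shape for that translation layer, assuming the wrapper's Get() returns a ManagementObjectCollection as ManagementObjectSearcher.Get() does (the exception type and names here are illustrative, not from the question):

public class SearchFailedException : Exception
{
    public ManagementStatus ErrorCode { get; private set; }

    public SearchFailedException(ManagementStatus errorCode, Exception inner)
        : base("WMI search failed: " + errorCode, inner)
    {
        ErrorCode = errorCode;
    }
}

public class SearchRunner
{
    public ManagementObjectCollection RunSearch(Searcher searcher)
    {
        try
        {
            return searcher.Get();
        }
        catch (ManagementException ex)
        {
            // Translate immediately, so the rest of the code (and the tests)
            // only ever deal with an exception that is easy to construct.
            throw new SearchFailedException(ex.ErrorCode, ex);
        }
    }
}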

I would subclass ManagementException and in the subclass override the ErrorCode getter (if the normal protection levels stop you from doing that, maybe introspection can get you closer). Any code that handles ManagementException but has never heard about your specific subclass should handle your subclass "as if" it was the ManagementException that you're trying to simulate for testing purposes, after all.
Edit: it's conceivable that ErrorCode just cannot be overridden (I hate languages that are SO rigid that they can stop testing in this way, but cannot deny they exist;-). In this case, Dependency Injection can still save you -- DI is one of my favorite patterns for testing.
DI's purpose in testing is to decouple the code under test from rigid assumptions that would inhibit testability -- and that's just what we have here, albeit in an unusual form. Your code under test currently does, say, x.ErrorCode to obtain the error code of exception x. Very well, it must then do, instead, getErrorCode(x) where getErrorCode is a delegate which normally just does return x.ErrorCode; and it must have a setter for the getErrorCode delegate, so that for testing purposes, you can change it to a delegate which does return 23 (or whatever error-code value you want to simulate for testing).
Details can vary, but dependency injection can (among other things) help compensate for some kinds of excessive rigidity in objects you inevitably get from the system (or from other libraries &c that you cannot directly modify), as in this example.
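A minimal sketch of that delegate injection in C#, with hypothetical names; the production default simply reads ErrorCode, and a test can swap in whatever it wants:

public class WmiErrorHandler
{
    // Injected accessor; defaults to reading the real property.
    public Func<ManagementException, ManagementStatus> GetErrorCode { get; set; }

    public WmiErrorHandler()
    {
        GetErrorCode = ex => ex.ErrorCode;
    }

    public bool IsAccessDenied(ManagementException ex)
    {
        return GetErrorCode(ex) == ManagementStatus.AccessDenied;
    }
}

// In a test:
// handler.GetErrorCode = ex => ManagementStatus.AccessDenied;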


Is it bad practice to unit test a method that is calling another method I am already testing?

Consider you have the following method:
public Foo ParseMe(string filepath)
{
    // break up filename
    // validate filename & extension
    // retrieve info from file if it's a certain type
    // some other general things you could do, etc

    var myInfo = GetFooInfo(filename);

    // create new object based on this data returned AND data in this method
}
Currently I have unit tests for GetFooInfo, but I think I also need to build unit tests for ParseMe. In a situation like this where you have two methods that return two different properties - and a change in either of them could break something - should unit tests be created for both to determine that the output is as expected?
I like to err on the side of caution and be more wary about things breaking and ensuring that maintenance later on down the road is easier, but I feel very skeptical about adding very similar tests in the test project. Would this be bad practice or is there any way to do this more efficiently?
I'm marking this as language agnostic, but just in case it matters I am using C# and NUnit. Also, I saw a post similar to this in title only, but the question is different. Sorry if this has already been asked.
ParseMe looks sufficiently non-trivial to require a unit test. To answer your precise question, if "you have two methods that return two different properties - and a change in either of them could break something", you should absolutely unit test them.
Even if the bulk of the work is in GetFooInfo, at minimum you should test that it's actually called. I know nothing about NUnit, but I know in other frameworks (like RSpec) you can write tests like GetFooInfo.should be_called(:once).
It is not a bad practice to test a method that is calling another method. In fact, it is a good practice. If you have a method calling another method, it is probably performing additional functionality, which should be tested.
If you find yourself unit testing a method that calls a method that is also being unit tested, then you are probably experiencing code reuse, which is a good thing.
I agree with @tsm - absolutely test both methods (assuming both are public).
This may be a smell that the method or class is doing too much - violating the Single Responsibility Principle. Consider doing an Extract Class refactoring and decoupling the two classes (possibly with Dependency Injection). That way you could test both pieces of functionality independently. (That said, I'd only do that if the functionality was sufficiently complex to warrant it. It's a judgment call.)
Here's an example in C#:
public interface IFooFileInfoProvider
{
    FooInfo GetFooInfo(string filename);
}

public class Parser
{
    private readonly IFooFileInfoProvider _fooFileInfoProvider;

    public Parser(IFooFileInfoProvider fooFileInfoProvider)
    {
        // Add a null check
        _fooFileInfoProvider = fooFileInfoProvider;
    }

    public Foo ParseMe(string filepath)
    {
        string filename = Path.GetFileName(filepath);
        var myInfo = _fooFileInfoProvider.GetFooInfo(filename);
        return new Foo(myInfo);
    }
}

public class FooFileInfoProvider : IFooFileInfoProvider
{
    public FooInfo GetFooInfo(string filename)
    {
        // Do I/O
        return new FooInfo(); // parameters...
    }
}
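With that split in place, a test for Parser can stub the provider and also verify, as suggested above, that GetFooInfo is actually called. A sketch using NUnit and Moq, assuming the types shown above:

[Test]
public void ParseMe_builds_a_Foo_from_the_provider_info()
{
    var info = new FooInfo();
    var provider = new Mock<IFooFileInfoProvider>();
    provider.Setup(p => p.GetFooInfo("data.txt")).Returns(info);

    var parser = new Parser(provider.Object);
    Foo result = parser.ParseMe(@"c:\files\data.txt");

    Assert.IsNotNull(result);
    provider.Verify(p => p.GetFooInfo("data.txt"), Times.Once());
}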
Many developers, me included, take a programming-by-contract approach. That requires you to consider each method as a black box: whether the method delegates to another method to accomplish its task does not matter when you are testing it. But you should also test all large or complicated parts of your program as units. So whether you need to unit test GetFooInfo depends on how complicated that method is.

What are strict and non-strict mocks?

I have started using Moq for mocking. Can someone explain the concept of strict and non-strict mocks to me? How can they be used in Moq?
Edit:
In which scenario do we use which type of mock?
I'm not sure about moq specifically, but here's how strict mocks work in Rhino. I declare that I expect a call to foo.Bar on my object foo:
foo.Expect(f => f.Bar()).Returns(5);
If the calling code does
foo.Bar();
then I'm fine because the expectations are exactly met.
However, if the calling code is:
foo.Quux(12);
foo.Bar();
then my expectation failed because I did not explicitly expect a call to foo.Quux.
To summarize, a strict mock will fail immediately if anything differs from the expectations. On the other hand, a non-strict mock (or a stub) will gladly "ignore" the call to foo.Quux and it should return a default(T) for the return type T of foo.Quux.
The creator of Rhino recommends that you avoid strict mocks (and prefer stubs) because you generally don't want your test to fail when receiving an unexpected call as above. It makes refactoring your code much more difficult when you have to fix dozens of tests that relied on the exact original behavior.
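Since the question is about Moq: there the distinction is chosen per mock via MockBehavior, with Loose being the default. A small sketch, using a made-up IFoo to mirror the Rhino example above:

public interface IFoo
{
    int Bar();
    void Quux(int value);
}

[Test]
public void Strict_versus_loose_in_Moq()
{
    // Loose (the default): unexpected calls are ignored and return default values.
    var loose = new Mock<IFoo>(MockBehavior.Loose);
    loose.Setup(f => f.Bar()).Returns(5);
    loose.Object.Quux(12);                      // silently accepted
    Assert.AreEqual(5, loose.Object.Bar());

    // Strict: any call without a matching Setup fails.
    var strict = new Mock<IFoo>(MockBehavior.Strict);
    strict.Setup(f => f.Bar()).Returns(5);
    Assert.AreEqual(5, strict.Object.Bar());
    Assert.Throws<MockException>(() => strict.Object.Quux(12));  // not set up
}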
Ever come across Given / When / Then?
Given a context
When I perform some events
Then an outcome should occur
This pattern appears in BDD's scenarios, and is also relevant for unit tests.
If you're setting up context, you're going to use the information which that context provides. For instance, if you're looking up something by Id, that's context. If it doesn't exist, the test won't run. In this case, you want to use a NiceMock or a Stub or whatever - Moq's default way of running.
If you want to verify an outcome, you can use Moq's verify. In this case, you want to record the relevant interactions. Fortunately, this is also Moq's default way of running. It won't complain if something happens that you weren't interested in for that test.
StrictMock is there for when you want no unexpected interactions to occur. It's how old-style mocking frameworks used to run. If you're doing BDD-style examples, you probably won't want this. It has a tendency to make tests a bit brittle and harder to read than if you separate the aspects of behaviour you're interested in. You have to set up expectations for both the context and the outcome, for all outcomes which will occur, regardless of whether they're of interest or not.
For instance, if you're testing a controller and mocking out both your validator and your repository, and you want to verify that you've saved your object, with a strict mock you also have to verify that you've validated the object first. I prefer to see those two aspects of behaviour in separate examples, because it makes it easier for me to understand the value and behaviour of the controller.
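For example (hypothetical controller, validator and repository types), a loose-mock example can state just the outcome it cares about and ignore the validation interaction entirely:

[Test]
public void Saves_the_order_when_it_is_valid()
{
    var validator = new Mock<IValidator>();    // loose: we set up only what this example needs
    var repository = new Mock<IRepository>();
    validator.Setup(v => v.IsValid(It.IsAny<Order>())).Returns(true);

    var controller = new OrdersController(validator.Object, repository.Object);
    var order = new Order();

    controller.Save(order);

    repository.Verify(r => r.Save(order), Times.Once());
}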
In the last four years I haven't found a single example which required the use of a strict mock - either it was an outcome I wanted to verify (even if I verify the number of times it's called) or a context for which I can tell if I respond correctly to the information provided. So in answer to your question:
non-strict mock: usually
strict mock: preferably never
NB: I am strongly biased towards BDD, so hard-core TDDers may disagree with me, and it will be right for the way that they are working.
Here's a good article.
I usually end up having something like this
public class TestThis {

    private final Collaborator1 collaborator1;
    private final Collaborator2 collaborator2;
    private final Collaborator3 collaborator3;

    TestThis(Collaborator1 collaborator1, Collaborator2 collaborator2, Collaborator3 collaborator3) {
        this.collaborator1 = collaborator1;
        this.collaborator2 = collaborator2;
        this.collaborator3 = collaborator3;
    }

    public Login login(String username) {
        User user = collaborator1.getUser(username);
        collaborator2.notify(user);
        return collaborator3.login(user);
    }
}
...and I use strict mocks for the 3 collaborators to test login(username). I don't see why strict mocks should never be used.
I have a simple convention:
Use strict mocks when the system under test (SUT) is delegating the call to the underlying mocked layer without really modifying or applying any business logic to the arguments passed to itself.
Use loose mocks when the SUT applies business logic to the arguments passed to itself and passes on some derived/modified values to the mocked layer.
For example:
Let's say we have a database provider, StudentDAL, which has two methods.
The data access interface looks something like this:

public Student GetStudentById(int id);
public IList<Student> GetStudents(int ageFilter, int classId);

The implementation which consumes this DAL looks like this:

public Student FindStudent(int id)
{
    // StudentDAL dependency injected
    return StudentDAL.GetStudentById(id);
    // Use a strict mock to test this
}

public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
    // StudentDAL dependency injected
    // The age filter is derived from the request and then passed on to the underlying layer
    int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;
    return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
    // Use a loose mock and use the Verify API of Moq to make sure that the age filter is correctly passed on
}
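Sketches of the matching tests, assuming the DAL sits behind an IStudentDAL interface injected into a StudentService that exposes the two methods above, and that StudentListRequest is a simple DTO (these names are illustrative):

[Test]
public void FindStudent_delegates_straight_to_the_DAL()
{
    // Strict mock: the test fails if anything other than GetStudentById(5) is called.
    var dal = new Mock<IStudentDAL>(MockBehavior.Strict);
    var expected = new Student();
    dal.Setup(d => d.GetStudentById(5)).Returns(expected);

    var service = new StudentService(dal.Object);

    Assert.AreSame(expected, service.FindStudent(5));
    dal.VerifyAll();
}

[Test]
public void GetStudentsForClass_passes_the_derived_age_filter_to_the_DAL()
{
    // Loose mock: we only care that the derived argument is correct.
    var dal = new Mock<IStudentDAL>();
    var service = new StudentService(dal.Object);
    var request = new StudentListRequest
    {
        DateOfBirthFilter = new DateTime(DateTime.Now.Year - 10, 1, 1),
        ClassId = 3
    };

    service.GetStudentsForClass(request);

    dal.Verify(d => d.GetStudents(10, 3), Times.Once());
}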

Unit testing factory methods which have a concrete class as a return type

So I have a factory class and I'm trying to work out what the unit tests should do. From this question I could verify that the interface returned is of a particular concrete type that I would expect.
What should I check for if the factory is returning concrete types (because there is no need - at the moment - for interfaces to be used)? Currently I'm doing something like the following:
[Test]
public void CreateSomeClassWithDependencies()
{
    // m_factory is instantiated in the SetUp method
    var someClass = m_factory.CreateSomeClassWithDependencies();

    Assert.IsNotNull(someClass);
}
The problem with this is that the Assert.IsNotNull seems somewhat redundant.
Also, my factory method might be setting up the dependencies of that particular class like so:
public SomeClass CreateSomeClassWithDependencies()
{
    return new SomeClass(CreateADependency(), CreateAnotherDependency(),
                         CreateAThirdDependency());
}
And I want to make sure that my factory method sets up all these dependencies correctly. Is there no other way to do this than to make those dependencies public/internal properties which I then check for in the unit test? (I'm not a big fan of modifying the test subjects to suit the testing.)
Edit: In response to Robert Harvey's question, I'm using NUnit as my unit testing framework (but I wouldn't have thought that it would make too much of a difference)
Often, there's nothing wrong with creating public properties that can be used for state-based testing. Yes: It's code you created to enable a test scenario, but does it hurt your API? Is it conceivable that other clients would find the same property useful later on?
There's a fine line between test-specific code and Test-Driven Design. We shouldn't introduce code that has no other potential than to satisfy a testing requirement, but it's quite alright to introduce new code that follow generally accepted design principles. We let the testing drive our design - that's why we call it TDD :)
Adding one or more properties to a class to give the user a better possibility of inspecting that class is, in my opinion, often a reasonable thing to do, so I don't think you should dismiss introducing such properties.
Apart from that, I second nader's answer :)
If the factory is returning concrete types, and you're guaranteeing that your factory always returns a concrete type and not null, then no, there isn't too much value in the test. It does allow you to make sure, over time, that this expectation isn't violated and that things like exceptions aren't thrown.
This style of test simply makes sure that, as you make changes in the future, your factory behaviour won't change without you knowing.
If your language supports it, for your dependencies, you can use reflection. This isn't always the easiest to maintain, and couples your tests very tightly to your implementation. You have to decide if that's acceptable. This approach tends to be very brittle.
But you really seem to be trying to separate which classes are constructed from how the constructors are called. You might just be better off using a DI framework to get that kind of flexibility.
By new-ing up all your types as you need them, you don't give yourself many seams (a seam is a place where you can alter behaviour in your program without editing in that place) to work with.
With the example as you give it though, you could derive a class from the factory. Then override / mock CreateADependency(), CreateAnotherDependency() and CreateAThirdDependency(). Now when you call CreateSomeClassWithDependencies(), you are able to sense whether or not the correct dependencies were created.
Note: the definition of "seam" comes from Michael Feather's book, "Working Effectively with Legacy Code". It contains examples of many techniques to add testability to untested code. You may find it very useful.
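A sketch of that "derive and sense" approach in C#, assuming the factory (called SomeClassFactory here, with made-up dependency type names) exposes its creation methods as protected virtual:

public class SensingFactory : SomeClassFactory
{
    public bool CreatedDependency;
    public bool CreatedAnotherDependency;
    public bool CreatedThirdDependency;

    protected override ADependency CreateADependency()
    {
        CreatedDependency = true;
        return base.CreateADependency();
    }

    protected override AnotherDependency CreateAnotherDependency()
    {
        CreatedAnotherDependency = true;
        return base.CreateAnotherDependency();
    }

    protected override AThirdDependency CreateAThirdDependency()
    {
        CreatedThirdDependency = true;
        return base.CreateAThirdDependency();
    }
}

[Test]
public void CreateSomeClassWithDependencies_creates_all_three_dependencies()
{
    var factory = new SensingFactory();

    factory.CreateSomeClassWithDependencies();

    Assert.IsTrue(factory.CreatedDependency);
    Assert.IsTrue(factory.CreatedAnotherDependency);
    Assert.IsTrue(factory.CreatedThirdDependency);
}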
What we do is create the dependencies with factories, and we use a dependency injection framework to substitute mock factories for the real ones when the test is run. Then we set up the appropriate expectations on those mock factories.
You can always check stuff with reflection. There is no need to expose something just for unit tests. I find it quite rare that I need to reach in with reflection and it may be a sign of bad design.
Looking at your sample code, yes, the Assert.IsNotNull seems redundant; depending on the way you designed your factory, though, some factories will return null objects instead of throwing.
As I understand it you want to test that the dependencies are built correctly and passed to the new instance?
If I were not able to use a framework like Google Guice, I would probably do something like this (here using JMock and Hamcrest):
@Test
public void CreateSomeClassWithDependencies()
{
    dependencyFactory = context.mock(DependencyFactory.class);
    classAFactory = context.mock(ClassAFactory.class);
    myDependency0 = context.mock(MyDependency0.class);
    myDependency1 = context.mock(MyDependency1.class);
    myDependency2 = context.mock(MyDependency2.class);
    myClassA = context.mock(ClassA.class);

    context.checking(new Expectations(){{
        oneOf(dependencyFactory).createDependency0(); will(returnValue(myDependency0));
        oneOf(dependencyFactory).createDependency1(); will(returnValue(myDependency1));
        oneOf(dependencyFactory).createDependency2(); will(returnValue(myDependency2));
        oneOf(classAFactory).createClassA(myDependency0, myDependency1, myDependency2);
        will(returnValue(myClassA));
    }});

    builder = new ClassABuilder(dependencyFactory, classAFactory);

    assertThat(builder.make(), equalTo(myClassA));
}
(if you cannot mock ClassA you can assign a non-mock version to myClassA using new)

What is the unit testing strategy for method call forwarding?

I have the following scenario:
public class CarManager
{
    ..

    public long AddCar(Car car)
    {
        try
        {
            string username = _authorizationManager.GetUsername();
            ...
            long id = _carAccessor.AddCar(username, car.Id, car.Name, ....);
            if (id == 0)
            {
                throw new Exception("Car was not added");
            }
            return id;
        }
        catch (Exception ex)
        {
            throw new AddCarException(ex);
        }
    }

    public List<long> AddCars(List<Car> cars)
    {
        List<long> ids = new List<long>();
        foreach (Car car in cars)
        {
            ids.Add(AddCar(car));
        }
        return ids;
    }
}
I am mocking out _carAccessor, _authorizationManager etc.
Now I want to unit test the CarManager class.
Should I have multiple tests for AddCar() such as
AddCarTest()
AddCarTestAuthorizationManagerException()
AddCarTestCarAccessorNoId()
AddCarTestCarAccessorException()
For AddCars(), should I repeat all the previous tests, since AddCars() calls AddCar()? It seems like repeating oneself. Should I perhaps not be calling AddCar() from AddCars()?
Please help.
There are two issues here:
Unit tests should do more than test methods one at a time. They should be designed to prove that your class can do the job it was designed for when integrated with the rest of the system. So you should mock out the dependencies and then write a test for each way in which your class will actually be used. For each (non-trivial) class you write there will be scenarios that involve the client code calling methods in a particular pattern.
There is nothing wrong with AddCars calling AddCar. You should repeat tests for error handling but only when it serves a purpose. One of the unofficial rules of unit testing is 'test to the point of boredom' or (as I like to think of it) 'test till the fear goes away'. Otherwise you would be writing tests forever. So if you are confident a test will add no value then don't write it. You may be wrong of course, in which case you can come back later and add it in. You don't have to produce a perfect test first time round, just a firm basis on which you can build as you better understand what your class needs to do.
A unit test should focus only on its corresponding class under test. All attributes of the class that are instances of other types should be mocked.
Suppose you have a class (CarRegistry) that uses some kind of data access object (for example, CarPlateDAO) which loads/stores car plate numbers from a relational database.
When you are testing the CarRegistry, you should not care whether CarPlateDAO performs correctly, since our DAO has its own unit tests.
You just create a mock that behaves like the DAO and returns correct or wrong values according to the expected behavior. You plug this mock DAO into your CarRegistry and test only the target class, without caring whether all aggregated classes are "green".
Mocking allows separation of testable classes and better focus on specific functionality.
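For example (the member names on CarRegistry and the DAO interface are assumed here, not from the question), the mock DAO is told what to return and the registry is tested on its own:

[Test]
public void Registry_reports_a_plate_as_registered_when_the_DAO_knows_it()
{
    var dao = new Mock<ICarPlateDao>();
    dao.Setup(d => d.Exists("ABC-123")).Returns(true);

    var registry = new CarRegistry(dao.Object);

    Assert.IsTrue(registry.IsRegistered("ABC-123"));
}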
When unit testing the AddCar method, create tests that exercise every code path. If _authorizationManager.GetUsername() can throw an exception, create a test where your mock for this object throws. BTW: don't throw or catch instances of Exception; derive a meaningful exception class.
For the AddCars method, you definitely should call AddCar. But you might consider making AddCar virtual and overriding it just to test that it's called with all cars in the list.
Sometimes you'll have to change the class design for testability.
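A hand-rolled version of that suggestion might look like the sketch below; it assumes AddCar has been made virtual and that the test can reach a suitable CarManager constructor:

public class RecordingCarManager : CarManager
{
    public List<Car> AddedCars = new List<Car>();

    public override long AddCar(Car car)
    {
        // Record the call instead of touching the accessor or authorization manager.
        AddedCars.Add(car);
        return AddedCars.Count;   // arbitrary id
    }
}

[Test]
public void AddCars_forwards_every_car_to_AddCar()
{
    var manager = new RecordingCarManager();
    var cars = new List<Car> { new Car(), new Car() };

    manager.AddCars(cars);

    Assert.AreEqual(2, manager.AddedCars.Count);
}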
Should I have multiple tests for
AddCar() such as
AddCarTest()
AddCarTestAuthorizationManagerException()
AddCarTestCarAccessorNoId()
AddCarTestCarAccessorException()
Absolutely! This tells you valuable information
For AddCars() should I repeat all previous tests as AddCars() calls AddCar() - it seems
like repeating oneself? Should I perhaps not be calling AddCar() from AddCars()?
Calling AddCar from AddCars is a great idea; it avoids violating the DRY principle. Similarly, you shouldn't be repeating tests. Think of it this way - you already wrote tests for AddCar, so when testing AddCars you can assume AddCar does what it says on the tin.
Let's put it this way - imagine AddCar was in a different class. You would have no knowledge of an authorisation manager. Test AddCars without the knowledge of what AddCar has to do.
For AddCars, you need to test all normal boundary conditions (does an empty list work, etc.) You probably don't need to test the situation where AddCar throws an exception, as you're not attempting to catch it in AddCars.
Writing tests that explore every possible scenario within a method is good practice. That's how I unit test in my projects. Tests like AddCarTestAuthorizationManagerException(), AddCarTestCarAccessorNoId(), or AddCarTestCarAccessorException() get you thinking about all the different ways your code can fail, which has led me to find new kinds of failures for a method I might have otherwise missed, as well as improve the overall design of the class.
In a situation like AddCars() calling AddCar() I would mock the AddCar() method and count the number of times it's called by AddCars(). The mocking library I use allows me to create a mock of CarManager and mock only the AddCar() method but not AddCars(). Then your unit test can set how many times it expects AddCar() to be called which you would know from the size of the list of cars passed in.
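With Moq, for instance, that counting can be done with a partial mock, assuming AddCar is virtual and CarManager has a constructor the mock can call:

[Test]
public void AddCars_calls_AddCar_once_per_car()
{
    var manager = new Mock<CarManager> { CallBase = true };   // real AddCars, faked AddCar
    manager.Setup(m => m.AddCar(It.IsAny<Car>())).Returns(42L);

    var cars = new List<Car> { new Car(), new Car(), new Car() };
    manager.Object.AddCars(cars);

    manager.Verify(m => m.AddCar(It.IsAny<Car>()), Times.Exactly(cars.Count));
}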

Testing a function that throws on failure

What is the best way of testing a function that throws on failure? Or testing a function that is fairly immune to failure?
For instance, I have an I/O Completion Port class that throws in the constructor if it can't initialise the port correctly. This uses the Win32 function CreateIoCompletionPort in the initialiser list. If the handle isn't set correctly - to a non-null value - then the constructor will throw an exception. I have never seen this function fail.
I am pretty certain that this (and other functions like it in my code) will behave correctly if they fail; the code is 50 lines long including white-space, so my questions are:
a) is it worth testing that it will throw
b) and if it is worth testing, how to?
c) should simple wrapper classes as these be unit-tested?
For b) I thought about overriding CreateIoCompletionPort and passing the values through. In the unit test, override it and cause it to return 0 when a certain value is passed in. However, since this is used in the constructor, it would need to be static. Does this seem valid or not?
If you are doing this in .NET, there is an ExpectedException attribute that you can add to your test:
[Test, ExpectedException(typeof(SpecificException), "Exception's specific message")]
public void TestWhichHasException()
{
    CallMethodThatThrowsSpecificException();
}
Test will pass if the exception of that type and with the specified message is thrown. The attribute has other overloads including having InnerExceptions, etc.
It is definitely worthwhile to test failure conditions, both that your class properly throws an exception when you want it to and that exceptions are handled properly in the class.
This can easily be done if you are acting on an object passed in to the constructor... just pass in a mock. If not, I tend to prefer to have the functionality moved to a protected method, and override the protected method to evoke my failure case. I will use Java as an example, but it should be easy enough to port the ideas to a C# case:
public class MyClass {

    public MyClass() throws MyClassException {
        // Whatever, including a call to invokeCreateIoCompletionPort
    }

    protected int invokeCreateIoCompletionPort(String str, int i) {
        return StaticClass.createIoCompletionPort(str, i);
    }
}

public class MyTest {

    @Test
    public void myTest() {
        try {
            new MyClassWrapper();
            fail("MyClassException was not thrown!");
        } catch (MyClassException e) {
            // expected
        }
    }

    private static class MyClassWrapper extends MyClass {

        MyClassWrapper() throws MyClassException {
            super();
        }

        @Override
        protected int invokeCreateIoCompletionPort(String str, int i) {
            return 0; // simulate CreateIoCompletionPort failing
        }
    }
}
As you can see, it is pretty easy to test whether an exception is being thrown by the constructor or method you are testing, and it is also pretty easy to inject an exception from an external class that can throw an exception. Sorry I'm not using your actual method, I just used the name to illustrate how it sounded like you are using it, and how I would test the cases it sounded you wanted to test.
Basically, any API details you expose can usually be tested, and if you want to KNOW that exceptional cases work as they should, you probably will want to test it.
You should consider writing your code in such a way that you can mock your I/O completion port. Make an interface or abstract class that exposes the methods you need on the I/O object, and write and test an implementation that does things like it's supposed to (and perhaps has an option to simulate failure).
AFAIK it's a common practice to mock external resources when unit testing, to minimize dependencies.
Sounds like C++ to me. You need a seam to mock out the Win32 functions. E.g. in your class you would create a protected method CreateIoCompletionPort() which calls ::CreateIoCompletionPort(), and for your test you create a class that derives from your I/O Completion Port class and overrides CreateIoCompletionPort() to do nothing but return NULL. Your production class still behaves as designed, but you are now able to simulate a failure in the CreateIoCompletionPort() function.
This technique is from Michael Feathers' book "Working Effectively with Legacy Code".