TDD - should I mock here or use the real implementation?

I'm writing a program arguments parser, just to get better at TDD, and I'm stuck on the following problem. Say I have my parser defined as follows:
class ArgumentsParser {
    private final ArgumentsConfiguration configuration;

    public ArgumentsParser(ArgumentsConfiguration configuration) {
        this.configuration = configuration;
    }

    public void parse(String[] programArguments) {
        // all the stuff for parsing
    }
}
and I imagine an ArgumentsConfiguration implementation like:
class ArgumentsConfiguration {
    private Map<String, Class> map = new HashMap<String, Class>();

    public void addArgument(String argName, Class valueClass) {
        map.put(argName, valueClass);
    }

    // get configured arguments methods etc.
}
This is my current stage. For now, my test looks like this:
@Test
public void shouldResultWithOneAvailableArgument() {
    ArgumentsConfiguration config = prepareSampleConfiguration();
    config.addArgument("mode", Integer.class);

    ArgumentsParser parser = new ArgumentsParser(config);
    parser.parse(new String[] { "--mode", "1" });
    // ....
}
My question is whether this approach is correct. I mean, is it OK to use the real ArgumentsConfiguration in tests, or should I mock it out? The default (current) implementation is quite simple (just a wrapped Map), but I imagine it could become more complicated, like fetching the configuration from some kind of datasource. Then it'd be natural to mock such "expensive" behaviour. But what is the preferred way here?
EDIT:
Maybe more clearly: should I mock ArgumentsConfiguration even without writing any implementation (just defining its public methods), use the mock for testing, and deal with real implementation(s) later? Or should I use the simplest implementation in tests and let them cover it indirectly? But if so, what about testing another Configuration implementation provided later?

Then it'd be natural to mock such "expensive" behaviour.
That's not the point. You're not mocking complex classes.
You're mocking to isolate classes completely.
Complete isolation assures that the tests demonstrate that classes follow their interface and don't have hidden implementation quirks.
Also, complete isolation makes debugging a failed test much, much easier. It's either the test, the class under test or the mocked objects. Ideally, the test and mocks are so simple they don't need any debugging, leaving just the class under test.
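For illustration, here is a minimal sketch of what full isolation could look like with Mockito. Note that getValueClass is a hypothetical accessor (the original class only hints at "get configured arguments methods etc."), and the sample arguments are made up:

import static org.mockito.Mockito.*;
import org.junit.Test;

public class ArgumentsParserIsolationTest {

    @Test
    public void shouldAskConfigurationForArgumentType() {
        // Mock the collaborator so the test exercises only ArgumentsParser.
        ArgumentsConfiguration config = mock(ArgumentsConfiguration.class);
        when(config.getValueClass("mode")).thenReturn(Integer.class);

        ArgumentsParser parser = new ArgumentsParser(config);
        parser.parse(new String[] { "--mode", "1" });

        // Only the interaction with the mock is verified here; if this test
        // fails, the problem is in ArgumentsParser, not in the configuration.
        verify(config).getValueClass("mode");
    }
}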

The correct answer is that you should mock anything that you're not trying to test directly (i.e., any dependencies the object under test has that do not pertain directly to the specific test case).

In this case, because your ArgumentsConfiguration is so simple, I'd recommend using the real implementation until your requirements demand something more complicated. There doesn't seem to be any logic in your ArgumentsConfiguration class, so it's safe to use the real object. If the time comes when the configuration is more complicated, the approach to take would probably be not to create a configuration that talks to some data source, but rather to generate the ArgumentsConfiguration object from that datasource. Then you can have a test that makes sure the configuration is generated properly from the datasource, and you don't need unnecessary abstractions.
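A rough sketch of that idea: a hypothetical loader builds the configuration from a java.util.Properties source (all names here are invented for illustration):

import java.util.Properties;

// Builds an ArgumentsConfiguration from a Properties source, so the
// "expensive" I/O lives here and the configuration stays a dumb map.
class ArgumentsConfigurationLoader {

    public ArgumentsConfiguration load(Properties source) throws ClassNotFoundException {
        ArgumentsConfiguration config = new ArgumentsConfiguration();
        for (String argName : source.stringPropertyNames()) {
            // Each property value names the argument's value class,
            // e.g. mode=java.lang.Integer
            config.addArgument(argName, Class.forName(source.getProperty(argName)));
        }
        return config;
    }
}

The loader gets its own test against a known Properties object, while the ArgumentsParser tests keep using a hand-built ArgumentsConfiguration.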

Related

Is it bad practice to unit test a method that is calling another method I am already testing?

Consider you have the following method:
public Foo ParseMe(string filepath)
{
    // break up filename
    var filename = Path.GetFileName(filepath);

    // validate filename & extension
    // retrieve info from file if it's a certain type
    // some other general things you could do, etc.

    var myInfo = GetFooInfo(filename);

    // create new object based on this data returned AND data in this method
    return new Foo(myInfo);
}
Currently I have unit tests for GetFooInfo, but I think I also need to build unit tests for ParseMe. In a situation like this, where you have two methods that return two different properties - and a change in either of them could break something - should unit tests be created for both to determine the output is as expected?
I like to err on the side of caution, being wary of things breaking and making sure maintenance down the road is easier, but I feel skeptical about adding very similar tests to the test project. Would this be bad practice, or is there a way to do this more efficiently?
I'm marking this as language agnostic, but just in case it matters, I am using C# and NUnit. Also, I saw a post similar to this in title only, but the question is different. Sorry if this has already been asked.
ParseMe looks sufficiently non-trivial to require a unit test. To answer your precise question: if "you have two methods that return two different properties - and a change in either of them could break something", you should absolutely unit test them.
Even if the bulk of the work is in GetFooInfo, at minimum you should test that it's actually called. I know nothing about NUnit, but I know in other frameworks (like RSpec) you can write tests like GetFooInfo.should be_called(:once).
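The thread mixes languages, so here is the analogous check sketched in Java with Mockito; parseMe and FooInfoProvider are made-up stand-ins for the code above:

import static org.mockito.Mockito.*;
import org.junit.Test;

public class ParseMeInteractionTest {

    // Hypothetical collaborator holding the GetFooInfo logic.
    interface FooInfoProvider {
        String getFooInfo(String filename);
    }

    // Minimal stand-in for ParseMe, just enough to drive the test.
    static String parseMe(FooInfoProvider provider, String filepath) {
        String filename = filepath.substring(filepath.lastIndexOf('/') + 1);
        return provider.getFooInfo(filename);
    }

    @Test
    public void callsGetFooInfoExactlyOnce() {
        FooInfoProvider provider = mock(FooInfoProvider.class);
        when(provider.getFooInfo("data.txt")).thenReturn("info");

        parseMe(provider, "/tmp/data.txt");

        // Mockito's equivalent of RSpec's "should be_called(:once)":
        verify(provider, times(1)).getFooInfo("data.txt");
    }
}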
It is not a bad practice to test a method that is calling another method. In fact, it is a good practice. If you have a method calling another method, it is probably performing additional functionality, which should be tested.
If you find yourself unit testing a method that calls a method that is also being unit tested, then you are probably experiencing code reuse, which is a good thing.
I agree with @tsm - absolutely test both methods (assuming both are public).
This may be a smell that the method or class is doing too much - violating the Single Responsibility Principle. Consider doing an Extract Class refactoring and decoupling the two classes (possibly with Dependency Injection). That way you could test both pieces of functionality independently. (That said, I'd only do that if the functionality was sufficiently complex to warrant it. It's a judgment call.)
Here's an example in C#:
public interface IFooFileInfoProvider
{
    FooInfo GetFooInfo(string filename);
}

public class Parser
{
    private readonly IFooFileInfoProvider _fooFileInfoProvider;

    public Parser(IFooFileInfoProvider fooFileInfoProvider)
    {
        if (fooFileInfoProvider == null)
            throw new ArgumentNullException("fooFileInfoProvider");
        _fooFileInfoProvider = fooFileInfoProvider;
    }

    public Foo ParseMe(string filepath)
    {
        string filename = Path.GetFileName(filepath);
        var myInfo = _fooFileInfoProvider.GetFooInfo(filename);
        return new Foo(myInfo);
    }
}

public class FooFileInfoProvider : IFooFileInfoProvider
{
    public FooInfo GetFooInfo(string filename)
    {
        // Do I/O
        return new FooInfo(); // parameters...
    }
}
Many developers, myself included, take a programming-by-contract approach. That requires you to consider each method as a black box: whether the method delegates to another method to accomplish its task does not matter when you are testing the method. But you should also test all large or complicated parts of your program as units. So whether you need to unit test GetFooInfo depends on how complicated that method is.
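In black-box terms, the ParseMe test would only pin inputs to outputs, regardless of any internal delegation. A Java sketch with invented names:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BlackBoxParseTest {

    // Stand-in for ParseMe; only the input/output contract matters here.
    static String parse(String filepath) {
        return filepath.substring(filepath.lastIndexOf('/') + 1);
    }

    @Test
    public void returnsFilenameForFullPath() {
        // Whether parse() delegates internally is irrelevant to this test.
        assertEquals("data.txt", parse("/tmp/data.txt"));
    }
}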

How can I refactor and unit test complex legacy Java EE5 EJB methods?

My colleagues and I are currently introducing unit tests to our legacy Java EE5 codebase. We use mostly JUnit and Mockito. In the process of writing tests, we have noticed that several methods in our EJBs were hard to test because they did a lot of things at once.
I'm fairly new to the whole testing business, and so I'm looking for insight in how to better structure the code or the tests. My goal is to write good tests without a headache.
This is an example of one of our methods and its logical steps in a service that manages a message queue:
consumeMessages
    acknowledgePreviouslyDownloadedMessages
    getNewUnreadMessages
    addExtraMessages (depending on somewhat complex conditions)
    markMessagesAsDownloaded
    serializeMessageObjects
The top-level method is currently exposed in the interface, while all sub-methods are private. As far as I understand it, it would be bad practice to just start testing private methods, as only the public interface should matter.
My first reaction was to just make all the sub-methods public and test them in isolation, then in the top-level method just make sure that it calls the sub-methods. But then a colleague mentioned that it might not be a good idea to expose all those low-level methods at the same level as the other one, as it might cause confusion and other developers might start using them when they should be using the top-level one. I can't fault his argument.
So here I am.
How do you reconcile exposing easily testable low-level methods with keeping the interfaces uncluttered? In our case, the EJB interfaces.
I've read in other unit test questions that one should use dependency injection or follow the single responsibility principle, but I'm having trouble applying it in practice. Would anyone have pointers on how to apply that kind of pattern to the example method above?
Would you recommend other general OO patterns or Java EE patterns?
At first glance, I would say that we probably need to introduce a new class, which would 1) expose public methods that can be unit tested but 2) not be exposed in the public interface of your API.
As an example, let's imagine that you are designing an API for a car. To implement the API, you will need an engine (with complex behavior). You want to fully test your engine, but you don't want to expose details to the clients of the car API (all I know about my car is how to push the start button and how to switch the radio channel).
In that case, what I would do is something like this:
public class Engine {
    public void doActionOnEngine() {}
    public void doOtherActionOnEngine() {}
}

public class Car {
    private Engine engine;

    // the setter is used for dependency injection
    public void setEngine(Engine engine) {
        this.engine = engine;
    }

    // notice that there is no getter for engine

    public void doActionOnCar() {
        engine.doActionOnEngine();
    }

    public void doOtherActionOnCar() {
        engine.doActionOnEngine();
        engine.doOtherActionOnEngine();
    }
}
For the people using the Car API, there is no way to access the engine directly, so there is no risk of doing harm. On the other hand, it is possible to fully unit test the engine.
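To make the testing side concrete, here is a sketch of how the Car could be tested against a mocked Engine with Mockito (the names come from the example above):

import static org.mockito.Mockito.*;
import org.junit.Test;

public class CarTest {

    @Test
    public void doActionOnCarDelegatesToEngine() {
        Engine engine = mock(Engine.class);
        Car car = new Car();
        car.setEngine(engine);

        car.doActionOnCar();

        // The Car test only checks the delegation; the Engine's complex
        // behavior gets its own direct unit tests.
        verify(engine).doActionOnEngine();
    }
}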
Dependency Injection (DI) and Single Responsibility Principle (SRP) are highly related.
SRP basically states that each class should only do one thing and delegate all other matters to separate classes. For instance, your serializeMessageObjects method should be extracted into its own class -- let's call it MessageObjectSerializer.
DI means injecting (passing) the MessageObjectSerializer object as an argument to your MessageQueue object -- either in the constructor or in the call to the consumeMessages method. You can use DI frameworks to do this for you, but I recommend doing it manually to get the concept.
Now, if you create an interface for the MessageObjectSerializer, you can pass that to the MessageQueue, and then you get the full value of the pattern, as you can create mocks/stubs for easy testing. Suddenly, consumeMessages doesn't have to pay attention to how serializeMessageObjects behaves.
Below, I have tried to illustrate the pattern. Note that when you want to test consumeMessages, you don't have to use the MessageObjectSerializer object. You can make a mock or stub that does exactly what you want it to do, and pass it instead of the concrete class. This really makes testing so much easier. Please forgive any syntax errors; I did not have access to Visual Studio, so it is written in a text editor.
// THE MAIN CLASS
public class MyMessageQueue
{
    private IMessageObjectSerializer _serializer;

    // Constructor that gets the serialization logic injected
    public MyMessageQueue(IMessageObjectSerializer serializer)
    {
        _serializer = serializer;
        // Also a lot of other injection
    }

    // Your main method. Now it calls an external object to serialize.
    public void consumeMessages()
    {
        // Do all the other stuff
        _serializer.serializeMessageObjects();
    }
}

// THE SERIALIZER CLASS
public class MessageObjectSerializer : IMessageObjectSerializer
{
    public List<MessageObject> serializeMessageObjects()
    {
        // DO THE SERIALIZATION LOGIC HERE
        return new List<MessageObject>();
    }
}

// THE INTERFACE FOR THE SERIALIZER
public interface IMessageObjectSerializer
{
    List<MessageObject> serializeMessageObjects();
}
EDIT: Sorry, my example is in C#. I hope you can use it anyway :-)
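Since the question itself is Java with JUnit/Mockito, here is roughly what the corresponding test could look like; the queue and serializer are minimal Java mirrors of the C# sketch above, inlined to keep the example self-contained:

import static org.mockito.Mockito.*;
import org.junit.Test;
import java.util.Collections;
import java.util.List;

public class MyMessageQueueTest {

    interface MessageObjectSerializer {
        List<Object> serializeMessageObjects();
    }

    static class MyMessageQueue {
        private final MessageObjectSerializer serializer;

        MyMessageQueue(MessageObjectSerializer serializer) {
            this.serializer = serializer;
        }

        void consumeMessages() {
            // ... all the other queue work would happen here ...
            serializer.serializeMessageObjects();
        }
    }

    @Test
    public void consumeMessagesDelegatesSerialization() {
        MessageObjectSerializer serializer = mock(MessageObjectSerializer.class);
        when(serializer.serializeMessageObjects())
                .thenReturn(Collections.<Object>emptyList());

        new MyMessageQueue(serializer).consumeMessages();

        // consumeMessages no longer needs to care how serialization behaves.
        verify(serializer).serializeMessageObjects();
    }
}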
Well, as you have noticed, it's very hard to unit test a concrete, high-level program. You have also identified the two most common issues:
Usually the program is configured to use specific resources, such as a specific file, IP address, or hostname. To counter this, you need to refactor the program to use dependency injection. This is usually done by adding constructor parameters that replace the hardcoded values.
It's also very hard to test large classes and methods. This is usually due to the combinatorial explosion in the number of tests required to cover a complex piece of logic. To counter this, you will usually refactor first to get lots more (but shorter) methods, then try to make the code more generic and testable by extracting several classes from your original class, each with a single public entry method and several private utility methods. This is essentially the single responsibility principle.
Now you can start working your way "up" by testing the new classes. This will be a lot easier, as the combinatorics are much easier to handle at this point.
At some point along the way you will probably find that you can simplify your code greatly by using these design patterns: Command, Composite, Adapter, Factory, Builder and Facade. These are the most common patterns for cutting down clutter.
Some parts of the old program will probably be largely untestable, either because they are just too crufty, or because it's not worth the trouble. Here you can settle for a simple test that just checks that the output from known input has not changed. Essentially a regression test.
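Such a regression (or "characterization") test can be as simple as pinning the current output for a known input. A sketch, where legacyTransform stands in for whatever crufty routine is being fenced off:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class LegacyRegressionTest {

    // Stand-in for the untestable legacy routine being pinned down.
    static String legacyTransform(String input) {
        return input.trim().toUpperCase();
    }

    @Test
    public void knownInputStillProducesRecordedOutput() {
        // The expected value was recorded once from the current behavior;
        // the test guards against unintended change, not correctness.
        assertEquals("HELLO WORLD", legacyTransform("  hello world "));
    }
}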

Writing maintainable unit tests with mock objects

This is a simplified version of a class I'm writing a unit test for:
class SomeClass {
    void methodA() {
        methodB();
        methodC();
        methodD();
    }

    void methodB() {
        // does something
    }

    void methodC() {
        // does something
    }

    void methodD() {
        // does something
    }
}
While writing the unit tests for this class, I've mocked out the objects used in each method with EasyMock. It was easy to set up the mock objects and their expectations for methods B, C, and D. But to test method A, I have to set up a lot more mock objects and their expectations. Also, I'm testing method A under different conditions, meaning I have to set up the mock objects many times with different expectations.
In the end, my unit test becomes hard to maintain and pretty cluttered. I was wondering if anyone has seen a good solution to this problem.
If I understand your question correctly, I think that this is a matter of design. The nice thing about unit testing is that writing tests often forces you to make your design better. If you need to mock too many things while testing a method it often means you should split your class into two smaller classes, which will be easier to test (and write, and maintain, and bugfix, and reuse, etc.).
In your case, method A seems to be at a higher level than methods B, C, and D. You could consider moving it to a higher-level class that wraps SomeClass:
class HigherLevelClass {
    private ISomeClass someClass;

    public HigherLevelClass(ISomeClass someClass)
    {
        this.someClass = someClass;
    }

    public void methodA() {
        someClass.methodB();
        someClass.methodC();
        someClass.methodD();
    }
}

class SomeClass : ISomeClass {
    public void methodB() {
        // does something
    }

    public void methodC() {
        // does something
    }

    public void methodD() {
        // does something
    }
}
Now when you are testing methodA, all you need to mock is the small ISomeClass interface and the three method calls.
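With Mockito, that test could look roughly like this; the interface and wrapper are inlined Java equivalents of the sketch above:

import static org.mockito.Mockito.*;
import org.junit.Test;

public class HigherLevelClassTest {

    interface ISomeClass {
        void methodB();
        void methodC();
        void methodD();
    }

    static class HigherLevelClass {
        private final ISomeClass someClass;

        HigherLevelClass(ISomeClass someClass) {
            this.someClass = someClass;
        }

        void methodA() {
            someClass.methodB();
            someClass.methodC();
            someClass.methodD();
        }
    }

    @Test
    public void methodACallsAllThreeLowerLevelMethods() {
        ISomeClass someClass = mock(ISomeClass.class);

        new HigherLevelClass(someClass).methodA();

        // One small interface to mock, three interactions to verify.
        verify(someClass).methodB();
        verify(someClass).methodC();
        verify(someClass).methodD();
    }
}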
You could extract common setup code into separate (possibly parametrized) methods, then call them whenever appropriate. If the tests for methodA have a very different fixture from the tests of the other methods, there may not be much to put into the @Before method itself, so you need to call the appropriate combination of setup helper methods from the test methods themselves. It is still a bit cumbersome, but better than duplicating code all over the place.
Depending on what unit test framework you use, there may be other options too, but the above should work with any framework.
This is an example of a Fragile Test: the mock setups have too intimate a knowledge of the SUT.
I don't know EasyMock, but with Moq you don't need to setup void methods. However, with Moq the methods would have to be public or protected and virtual.
For each test you're writing, consider the behaviour which is valuable for that test. You'll have some contexts you're setting up which the behaviour relies on, and some outcomes as a result of the behaviour that you want to verify.
Set up relevant contexts, verify the outcomes, and use NiceMocks for everything else.
I prefer Mockito (Java) or Moq (.NET) which work this way by default. Here's Mockito's page on Mockito vs. EasyMock so you can get the idea (EasyMock didn't have NiceMock before Mockito came along):
http://code.google.com/p/mockito/wiki/MockitoVSEasyMock
You can probably use EasyMock's NiceMock in a similar way. Hopefully this will help you detangle your tests. You can always import both frameworks and use them alongside each other / incrementally switch over if it helps.
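With Mockito, every mock is a "nice mock" by default: unstubbed calls simply return defaults instead of failing, so only the interactions you actually care about need any setup. A quick sketch:

import static org.mockito.Mockito.*;
import java.util.List;

public class NiceMockDemo {
    public static void main(String[] args) {
        List<String> list = mock(List.class);

        // No expectations were set, yet nothing blows up:
        System.out.println(list.size()); // prints 0 (default for int)
        System.out.println(list.get(0)); // prints null (default for objects)
    }
}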
Good luck!
I’m testing method A in different conditions, meaning I have to setup the mock objects many times with different expectations.
If you care about what methodA is doing and which collaborator method has to be called, then you have to set up different expectations... I don't see how you can skip this step?!
If you test logout, you would expect a call to myCollaborator.logout(); if you test login, you would expect something like myCollaborator.login().
If you have many methods with lots of different expectations, maybe that's a sign you should split your class into collaborators.
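In Mockito terms, those two tests would differ only in what they verify. A self-contained sketch with a made-up service and collaborator:

import static org.mockito.Mockito.*;
import org.junit.Test;

public class SessionServiceTest {

    // Hypothetical collaborator and service, just to show the shape.
    interface Collaborator {
        void login();
        void logout();
    }

    static class SessionService {
        private final Collaborator myCollaborator;

        SessionService(Collaborator myCollaborator) {
            this.myCollaborator = myCollaborator;
        }

        void login()  { myCollaborator.login(); }
        void logout() { myCollaborator.logout(); }
    }

    @Test
    public void testLogin() {
        Collaborator myCollaborator = mock(Collaborator.class);
        new SessionService(myCollaborator).login();
        verify(myCollaborator).login();
    }

    @Test
    public void testLogout() {
        Collaborator myCollaborator = mock(Collaborator.class);
        new SessionService(myCollaborator).logout();
        verify(myCollaborator).logout();
    }
}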

Do I need to write a unit test for a method within a service class that only calls a method within a repository class?

Example
I have a repository class (DAL):
public class MyRepository : IMyRepository
{
    public void Delete(int itemId)
    {
        // creates a concrete EF context class
        // deletes the object by calling context.DeleteObject()
    }

    // other methods
}
I also have a service class (BLL):
public class MyService
{
    private IMyRepository localRepository;

    public MyService(IMyRepository instance)
    {
        this.localRepository = instance;
    }

    public void Delete(int itemId)
    {
        this.localRepository.Delete(itemId);
    }

    // other methods
}
Creating a unit test for MyRepository would take much more time than implementing it, because I would have to mock the Entity Framework context.
But creating a unit test for MyService seems pointless, because it only calls into the repository. All I could check is whether it actually called the repository's Delete method.
Question
How would you suggest unit testing this pair of Delete methods? Both? One? None? And what would you test?
Yes, I would definitely write a unit test for the Service Layer. The reason is that you're not just testing that your implementation works now, but also that it will continue to work in the future.
This is a vital concept to understand. If someone comes along later on and changes your ServiceLayer, and there's no unit test, how can you verify that the functionality continues to work?
I would also write tests for your DAL, but I would put those in a separate assembly called DataTests or something. The purpose here is to isolate your concerns across assemblies. Unit Tests shouldn't be concerned with your DAL, really.
Yes, both.
IMyRepository mock = ...;
// create Delete(int) expectation
MyService service = new MyService(mock);
service.Delete(100);
// Verify expectations
Your Delete method right now might only call the Delete method on the repository, but that doesn't mean it always will. You want to have unit tests for this partly to verify it behaves correctly and partly as way of defining your specifications of how the repository is to work.
You also ought to have a test that verifies that the constructor will throw an exception if the repository is null. You might also have other validation to do in this method, such as rejecting negative or zero IDs. Maybe that doesn't happen here; make it part of the specification by creating tests that verify the expected behaviors.
They seem trivial, but I can all but guarantee it will change one day, and your expectations and specifications may not be verified.
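Filling in that pseudocode with Mockito (the classes here are hypothetical Java twins of the C# code in the question, inlined so the sketch stands alone):

import static org.mockito.Mockito.*;
import org.junit.Test;

public class MyServiceTest {

    interface MyRepository {
        void delete(int itemId);
    }

    static class MyService {
        private final MyRepository repository;

        MyService(MyRepository repository) {
            if (repository == null) {
                throw new IllegalArgumentException("repository");
            }
            this.repository = repository;
        }

        void delete(int itemId) {
            repository.delete(itemId);
        }
    }

    @Test
    public void deleteDelegatesToRepository() {
        MyRepository mock = mock(MyRepository.class);

        new MyService(mock).delete(100);

        // The specification: Delete must hit the repository.
        verify(mock).delete(100);
    }

    @Test(expected = IllegalArgumentException.class)
    public void constructorRejectsNullRepository() {
        new MyService(null);
    }
}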
Create the test for the Service. Currently all it does is call into the repository's Delete method; however, you shouldn't care about that. What if the functionality later becomes much more complicated? Don't you want unit test code that assures you the functionality still works as expected?
If you're exposing your Delete through your Service, you're expecting it to have an effect. Write a Unit Test to test that effect. Depending on your particular needs, I'd say you might not need to have a test on the Repository Delete, particularly if that functionality is getting exercised as part of your Service Delete functionality, but it really all depends on what level of coverage you're trying for.
Also, if you had created this code with TDD, you would have had a test. It actually matters whether people can call Delete through your service, so you actually have to test it.
In my opinion you need to test both. Maybe you can do the creation of the EF context class in a separate factory that can be tested more easily, and mock the context class for the MyRepository tests. That would be easier, and using a factory to create context classes seems quite useful to me.

Unit testing factory methods which have a concrete class as a return type

So I have a factory class and I'm trying to work out what the unit tests should do. From this question I could verify that the interface returned is of a particular concrete type that I would expect.
What should I check for if the factory is returning concrete types (because there is no need - at the moment - for interfaces to be used)? Currently I'm doing something like the following:
[Test]
public void CreateSomeClassWithDependencies()
{
    // m_factory is instantiated in the SetUp method
    var someClass = m_factory.CreateSomeClassWithDependencies();

    Assert.IsNotNull(someClass);
}
The problem with this is that the Assert.IsNotNull seems somewhat redundant.
Also, my factory method might be setting up the dependencies of that particular class like so:
public SomeClass CreateSomeClassWithDependencies()
{
    return new SomeClass(CreateADependency(), CreateAnotherDependency(),
                         CreateAThirdDependency());
}
And I want to make sure that my factory method sets up all these dependencies correctly. Is there no other way to do this than to make those dependencies public/internal properties which I then check for in the unit test? (I'm not a big fan of modifying the test subjects to suit the testing.)
Edit: In response to Robert Harvey's question, I'm using NUnit as my unit testing framework (but I wouldn't have thought that it would make too much of a difference)
Often, there's nothing wrong with creating public properties that can be used for state-based testing. Yes: It's code you created to enable a test scenario, but does it hurt your API? Is it conceivable that other clients would find the same property useful later on?
There's a fine line between test-specific code and Test-Driven Design. We shouldn't introduce code that has no other potential than to satisfy a testing requirement, but it's quite alright to introduce new code that follow generally accepted design principles. We let the testing drive our design - that's why we call it TDD :)
Adding one or more properties to a class to give the user a better possibility of inspecting that class is, in my opinion, often a reasonable thing to do, so I don't think you should dismiss introducing such properties.
Apart from that, I second nader's answer :)
If the factory is returning concrete types, and you're guaranteeing that your factory always returns a concrete type and not null, then no, there isn't much value in the test. It does allow you to make sure, over time, that this expectation isn't violated and that things like exceptions aren't thrown.
This style of test simply makes sure that, as you make changes in the future, your factory behaviour won't change without you knowing.
If your language supports it, for your dependencies, you can use reflection. This isn't always the easiest to maintain, and couples your tests very tightly to your implementation. You have to decide if that's acceptable. This approach tends to be very brittle.
But you really seem to be trying to separate which classes are constructed, from how the constructors are called. You might just be better off with using a DI framework to get that kind of flexibility.
By new-ing up all your types as you need them, you don't give yourself many seams (a seam is a place where you can alter behaviour in your program without editing in that place) to work with.
With the example as you give it, though, you could derive a class from the factory, then override/mock CreateADependency(), CreateAnotherDependency() and CreateAThirdDependency(). Now when you call CreateSomeClassWithDependencies(), you are able to sense whether or not the correct dependencies were created.
Note: the definition of "seam" comes from Michael Feathers' book, "Working Effectively with Legacy Code". It contains examples of many techniques for adding testability to untested code. You may find it very useful.
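A sketch of that subclass-and-override technique in Java (all names invented; the point is only to open a seam in the factory):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class FactorySeamTest {

    // Hypothetical factory shaped like the one in the question.
    static class SomeClassFactory {
        public Object createSomeClassWithDependencies() {
            return new Object[] { createADependency(), createAnotherDependency() };
        }

        protected Object createADependency()       { return new Object(); }
        protected Object createAnotherDependency() { return new Object(); }
    }

    // The test subclass overrides the creation methods to sense the calls.
    static class SensingFactory extends SomeClassFactory {
        boolean dependencyCreated;
        boolean anotherDependencyCreated;

        @Override
        protected Object createADependency() {
            dependencyCreated = true;
            return super.createADependency();
        }

        @Override
        protected Object createAnotherDependency() {
            anotherDependencyCreated = true;
            return super.createAnotherDependency();
        }
    }

    @Test
    public void factoryCreatesAllDependencies() {
        SensingFactory factory = new SensingFactory();

        factory.createSomeClassWithDependencies();

        assertTrue(factory.dependencyCreated);
        assertTrue(factory.anotherDependencyCreated);
    }
}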
What we do is create the dependencies with factories, and we use a dependency injection framework to substitute mock factories for the real ones when the test is run. Then we set up the appropriate expectations on those mock factories.
You can always check stuff with reflection. There is no need to expose something just for unit tests. I find it quite rare that I need to reach in with reflection and it may be a sign of bad design.
Looking at your sample code: yes, the Assert.IsNotNull seems redundant. Depending on how you designed your factory, though, some factories will return null objects rather than throw exceptions.
As I understand it you want to test that the dependencies are built correctly and passed to the new instance?
If I were not able to use a framework like Google Guice, I would probably do it something like this (here using JMock and Hamcrest):
@Test
public void CreateSomeClassWithDependencies()
{
    dependencyFactory = context.mock(DependencyFactory.class);
    classAFactory = context.mock(ClassAFactory.class);
    myDependency0 = context.mock(MyDependency0.class);
    myDependency1 = context.mock(MyDependency1.class);
    myDependency2 = context.mock(MyDependency2.class);
    myClassA = context.mock(ClassA.class);

    context.checking(new Expectations() {{
        oneOf(dependencyFactory).createDependency0(); will(returnValue(myDependency0));
        oneOf(dependencyFactory).createDependency1(); will(returnValue(myDependency1));
        oneOf(dependencyFactory).createDependency2(); will(returnValue(myDependency2));
        oneOf(classAFactory).createClassA(myDependency0, myDependency1, myDependency2);
        will(returnValue(myClassA));
    }});

    builder = new ClassABuilder(dependencyFactory, classAFactory);

    assertThat(builder.make(), equalTo(myClassA));
}
(If you cannot mock ClassA, you can assign a non-mock version to myClassA using new.)