I have an Order entity with a refund() method that uses an abstract factory refundStrategyFactory to create the appropriate refund strategy at run-time:
public class Order extends Entity {
    public void refund(Employee emp, Quantity qty, IRefundStrategyFactory refundStrategyFactory) {
        // Use the abstract factory to determine the appropriate strategy at run-time.
    }
}
I'm trying to use dependency injection and have IRefundStrategyFactory injected automatically into the refund() method, but I couldn't find a way to achieve this.
I'm not using constructor injection because Order is an entity, and we shouldn't inject services into entities unless the service is only used temporarily, as in the refund method. This is better explained by Misko Hevery's blog post To “new” or not to “new”…:
It is OK for Newable to know about Injectable. What is not OK is for
the Newable to have a field reference to Injectable.
A similar answer by Mark Seemann to a related question also advocates the same idea.
Bonus point: aren't I exposing the internals of the refund() method by exposing its dependency on IRefundStrategyFactory? Should I sacrifice unit testing in isolation and create the factory inside the method itself?
Well, ideally your domain model should be free of infrastructure concerns, and IoC is an infrastructure concern; the usual recommendation is that the domain model consist of simple POJOs. So I would not inject beans into my domain model.
In this case I think you should have an application service that gets the factory injected and then simply passes the factory as a parameter to your Order class.
Does the factory class use some information from the Order class to decide which kind of strategy class to instantiate?
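If it does, the factory can stay a thin, stateless service. A rough sketch of what such a factory might look like (the strategy types, the Create signature, the Order.CoversWholeOrder helper, and the selection rule are illustrative assumptions, not taken from the question):

public interface IRefundStrategy
{
    void Refund(Order order, Employee employee, Quantity quantity);
}

public class FullRefundStrategy : IRefundStrategy
{
    public void Refund(Order order, Employee employee, Quantity quantity) { /* refund the whole order */ }
}

public class PartialRefundStrategy : IRefundStrategy
{
    public void Refund(Order order, Employee employee, Quantity quantity) { /* refund only the given quantity */ }
}

// Hypothetical factory implementation: it inspects the order to pick a strategy,
// so the entity never has to know which concrete strategies exist.
public class RefundStrategyFactory : IRefundStrategyFactory
{
    public IRefundStrategy Create(Order order, Quantity quantity)
    {
        // Example rule (an assumption): refunding the whole order uses a different strategy.
        return order.CoversWholeOrder(quantity)
            ? (IRefundStrategy)new FullRefundStrategy()
            : new PartialRefundStrategy();
    }
}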
Bonus point, Aren't I exposing the internals of the refund() method by
exposing its dependency on IRefundStrategyFactory?
I am not sure there is such a thing as exposing the internals of a method; that concept fits more naturally with "exposing the internals of a class". A method performs some action, and to do so it might need some information from the outside world. We could debate how to pass in that information, and whether that one method is doing too much, but I cannot reason about whether a method "exposes its implementation". Maybe you should elaborate on what you meant by that point.
I'd solve this by starting a refund process with eventual consistency, where you trigger a Refund action. The listeners to this action and its events would then do their own tasks, e.g. refund the cost of the goods, restock the items, etc. Alternatively, you could do this atomically in an application service that triggers several domain services, one after another, passing in the Order item:
FinancialService.RefundOrderGoods(myOrder);
StockService.RestockOrderItems(myOrder);
...
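For the eventual-consistency route mentioned first, the wiring might look roughly like this (the event type, the handler names, and the IHandle<T> interface are illustrative assumptions rather than an existing API):

using System;

// Minimal stand-in for whatever event/handler abstraction the messaging infrastructure provides.
public interface IHandle<TEvent>
{
    void Handle(TEvent @event);
}

public class OrderRefundRequested
{
    public Guid OrderId { get; }

    public OrderRefundRequested(Guid orderId)
    {
        OrderId = orderId;
    }
}

// Each listener performs its own part of the refund process independently.
public class RefundPaymentHandler : IHandle<OrderRefundRequested>
{
    public void Handle(OrderRefundRequested e)
    {
        // refund the cost of the goods for e.OrderId
    }
}

public class RestockItemsHandler : IHandle<OrderRefundRequested>
{
    public void Handle(OrderRefundRequested e)
    {
        // restock the items for e.OrderId
    }
}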
I'd avoid adding any services or repositories into your entities.
Your main problem is that you can't have the factory "injected automatically into the refund() method", but IMO that isn't a problem at all, since you can simply pass the dependency on to the method while still using constructor injection in the service that calls it:
public class ApplyRefundHandler : ICommandHandler<ApplyRefund>
{
    private readonly IRefundStrategyFactory refundStrategyFactory;
    private readonly IRepository<Order> orderRepository;
    private readonly IRepository<Employee> employeeRepository;

    public ApplyRefundHandler(IRefundStrategyFactory refundStrategyFactory,
        IRepository<Order> orderRepository,
        IRepository<Employee> employeeRepository)
    {
        this.refundStrategyFactory = refundStrategyFactory;
        this.orderRepository = orderRepository;
        this.employeeRepository = employeeRepository;
    }

    public void Handle(ApplyRefund command)
    {
        Order order = this.orderRepository.GetById(command.OrderId);
        Employee employee = this.employeeRepository.GetById(command.EmployeeId);

        order.refund(employee, command.Quantity, this.refundStrategyFactory);
    }
}
Aren't I exposing the internals of the refund() method by exposing its
dependency on IRefundStrategyFactory? Should I sacrifice unit testing
in isolation and create the factory inside the method itself?
Yes you are, but at the same time you are probably violating the Single Responsibility Principle by putting a lot of business logic in the Order class. If you hide the abstractions, you need to inject them into the constructor, and you will quickly find that your Order class accumulates a lot of dependencies. So instead you can view each of the Order's methods as a single responsibility (a class in disguise) that has little to no relationship with the other methods in that class, and in that case it makes sense to pass its dependencies into that method.
As a side note, please be aware that factory abstractions like your IRefundStrategyFactory are usually not good abstractions. You should consider removing it, as described in detail here.
When I find myself wanting to test the private functions of a class that has a small public API and a complex internal call structure I seem to end up choosing from the two following approaches:
If the class has functionality that is not reliant on the class's state and would offer useful functionality to other potential client code, then I should break it out into a service and test its public API.
If the class has functionality that is reliant on the class's state and would be tightly coupled if broken out, then I should test it through the public API by passing the correct parameters, and then name the test so that it references the private function I am targeting.
I feel that testing private functions directly makes classes less easy to refactor and tests more brittle, but testing private functions through the public API and binding them to it just by name and correct parameter values also feels a bit shoddy.
Is there a set of rules to abide by in these situations short of doing proper TDD? I have no choice as I am writing tests in retrospect.
You don't care about private methods being tested. You only care about the public API being tested. For a private method, all that matters about it is how it affects the visible behaviour of the object.
If you have some behaviour of your public API which is implemented as a private method, then that test will likely 'target' the private method through the arguments to the API method. That's not a BadThing per se - it covers a genuine test case for a real behaviour of the public API. You might, at a later stage, decide to refactor your class in such a way that the same behaviour is implemented in some other fashion. What's important is that the test encapsulates the behaviour of the public API.
It may then happen that you extract the private method into its own class and the method becomes a part of the public API of the new class. That's fine too. Your new class becomes a dependency of the old one, and you can then use DI to decouple the intricacies of triggering the behaviour of the dependency from the actual test case. That too is good, provided the new class has a definite reason for existing beyond just servicing the first class.
It all boils down to what's the best thing for the code you're looking at - does it make sense to completely capture the behaviour of your original API without having to fiddle with mocked dependencies? In an ideal world this would probably always be the case, but as your API becomes larger or more complex or higher-level that can cause problems of its own.
The important thing to remember though is that your tests are never testing a private method. They're always testing a visible behaviour of the system under test, which may (or may not) be implemented in terms of one or more private methods.
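As an illustration (a hypothetical class and an NUnit-style test, not code from the question), a test can pin down behaviour that happens to live in a private method purely through the public API:

using System;
using NUnit.Framework;

public class PriceCalculator
{
    public decimal TotalFor(decimal unitPrice, int quantity)
    {
        return RoundToCents(unitPrice * quantity);
    }

    // Private detail; tests never call this directly.
    private static decimal RoundToCents(decimal amount)
    {
        return Math.Round(amount, 2, MidpointRounding.AwayFromZero);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    // The test name references the behaviour (rounding), not the private method.
    [Test]
    public void TotalFor_RoundsHalfCentsAwayFromZero()
    {
        var calculator = new PriceCalculator();

        Assert.AreEqual(0.05m, calculator.TotalFor(0.015m, 3));
    }
}

If RoundToCents were ever extracted into its own class, this test would keep passing, because it only asserts on the public result.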
I recently implemented some code similar to below and submitted a pull request to get it added to our shared repository. The request was rejected and I was told this pattern was bad. Instead I should be putting my dependencies closer to the creation of my objects. If the parent object creates a child object then a Func childFactory should be passed in rather than a more generic object since that could be misused, increases coupling and becomes much harder to test.
public interface ILogResolverService
{
    ILogger<T> Resolve<T>();
}

public class LogResolverService : ILogResolverService
{
    private readonly IContainer _container;

    public LogResolverService(IContainer container)
    {
        _container = container;
    }

    public ILogger<T> Resolve<T>()
    {
        return _container.Resolve<ILogger<T>>();
    }
}
The idea of the code is to pass around an object that can create an ILogger, so that you can log with the correct class name. If you create a child view model, for example, you would let it create its own ILogger, etc.
Since this issue has polarized opinions amongst my colleagues, I wanted to get an opinion from the community.
Added After Comments
I should have added that I don't see this as the Service Locator pattern, because it is a strongly typed interface passed into the constructor of a parent, so you know all your dependencies up front. The worst that can happen is that a new method is added to the interface ILogResolverService. Unless I'm missing something.
From what I can tell from the design you've outlined above, the main issue here is that your service is somewhat container-aware.
Even if the return values you're providing are strongly typed and unambiguous, the tricky part is somewhere else: the service's knowledge is too broad and overlaps with the container's job.
It's always a good practice to explicitly provide the dependency your builder is using via an Add method.
public AddLoggerService[...]
Depending on your context, you can ask the container to decorate/compile such a service by adding all the needed dependencies at runtime.
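One way to read that advice (an illustrative sketch, not the answer's exact design) is to hand each consumer its closed logger directly, so only the composition root ever touches the container; the shape of ILogger<T> shown here is an assumption:

// ILogger<T> stands in for the question's logger abstraction; its members are assumed.
public interface ILogger<T>
{
    void Log(string message);
}

public class ChildViewModel
{
    private readonly ILogger<ChildViewModel> _logger;

    // The concrete logger is supplied by the composition root; no container reference is needed.
    public ChildViewModel(ILogger<ChildViewModel> logger)
    {
        _logger = logger;
    }

    public void Save()
    {
        _logger.Log("Saving child view model...");
    }
}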
Hope I have shed some light on the matter.
Regards.
When we have classes like
class IntoController(IViewModelCreator viewModelCreator) {}
and
class ProductController(ICommandFactory commandFactory, IViewModelCreator viewModelCreator) {}
and
class ProductController(ICommandFactory commandFactory, IViewModelCreator viewModelCreator, IRepository repository) {}
and a lot more. It takes a lot of time to mock these interfaces each time. What do you think about a general-purpose class which contains a big set of mocks?
class BaseControllerUnitTests
{
    protected Mock<IViewModelCreator> ViewModelCreator { get; set; }
    protected Mock<ICommandFactory> CommandFactory { get; set; }
    protected Mock<IRepository> Repository { get; set; }
}
Thanks in advance.
I actually do this; I just keep them in a different class called TestDataFactory so I don't get into problems with inheritance (just in case I have to extend some other base class in a test).
The factory shouldn't be global/static (see below).
Pro:
There is a single place for all tests to go to get a valid object graph
If the object graph changes, there is just one place to go to fix all the tests
You can keep references to the mocks in the factory for mocking inner method calls (i.e. you can ask for a ProductController and later, when you ask for an ICommandFactory, you get the one which was injected into the controller).
Con:
The test factory will become quite big. Eventually, you'll have to split it into several files.
Not all tests need the exact same mock setup. Sometimes you'll need to insert a real object. My solution is to allow overriding the references which the factory keeps, but that makes the code even more clumsy.
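For illustration, a minimal sketch of such a factory using Moq (ProductController and the interfaces come from the question; everything else is illustrative):

using Moq;

// Illustrative sketch of a test data factory; not a drop-in implementation.
public class TestDataFactory
{
    public Mock<IViewModelCreator> ViewModelCreator { get; } = new Mock<IViewModelCreator>();
    public Mock<ICommandFactory> CommandFactory { get; } = new Mock<ICommandFactory>();
    public Mock<IRepository> Repository { get; } = new Mock<IRepository>();

    // Builds a controller wired to the mocks above, so a test can later reach
    // the very same mock instances to set up or verify behaviour.
    public ProductController CreateProductController()
    {
        return new ProductController(CommandFactory.Object,
                                     ViewModelCreator.Object,
                                     Repository.Object);
    }
}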
In addition to Aaron Digulla's answer I'd like to suggest my colleague's post with some examples. He calls it Test Context. I use this approach pretty much as well.
The fact that you're writing test code shouldn't mean that all software engineering best practices go out the window.
If you have a subset of common functionality between your tests (in this case - mocking some methods of the tested class) then yes, by all means - you can extract a base class.
I'm trying to get dependency inversion, or at least understand how to apply it, but the problem I have at the moment is how to deal with dependencies that are pervasive. The classic example of this is trace logging, but in my application I have many services that most if not all code will depend on (trace logging, string manipulation, user message logging etc).
None of the solutions to this would appear to be particularly palatable:
Using constructor dependency injection would mean that most of the constructors would have several, many, standard injected dependencies because most classes explicitly require those dependencies (they are not just passing them down to objects that they construct).
Service locator pattern just drives the dependencies underground, removing them from the constructor but hiding them so that it's not even explicit that the dependencies are required
Singleton services are, well, Singletons, and also serve to hide the dependencies
Lumping all those common services together into a single CommonServices interface and injecting that as well (a) violates the Law of Demeter and (b) is really just another name for a Service Locator, albeit a specific rather than a generic one.
Does anyone have any other suggestions for how to structure these kinds of dependencies, or indeed any experience of any of the above solutions?
Note that I don't have a particular DI framework in mind, in fact we're programming in C++ and would be doing any injection manually (if indeed dependencies are injected).
Service locator pattern just drives the dependencies underground,
Singleton services are, well, Singletons, and also serve to hide the
dependencies
This is a good observation. Hiding the dependencies doesn't remove them. Instead you should address the number of dependencies a class needs.
Using constructor dependency injection would mean that most of the
constructors would have several, many, standard injected dependencies
because most classes explicitly require those dependencies
If this is the case, you are probably violating the Single Responsibility Principle. In other words, those classes are probably too big and do too much. Since you are talking about logging and tracing, you should ask yourself if you aren't logging too much. But in general, logging and tracing are cross-cutting concerns and you should not have to add them to many classes in the system. If you correctly apply the SOLID principles, this problem goes away (as explained here).
The Dependency Inversion Principle is part of the SOLID principles and is important for, among other things, promoting testability and reuse of the higher-level algorithm.
Background:
As indicated on Uncle Bob's web page, Dependency Inversion is about depending on abstractions, not on concretions.
In practice, what happens is that some places where your class instantiates another class directly need to be changed so that the implementation of the inner class can be specified by the caller.
For instance, if I have a Model class, I should not hard-code it to use a specific database class. If I do that, I cannot make the Model class use a different database implementation. Being able to do so might be useful if you have a different database provider, or if you want to replace the database with a fake one for testing purposes.
Rather than the Model doing a "new" on the Database class, it will simply use an IDatabase interface that the Database class implements. The Model never refers to a concrete Database class. But then who instantiates the Database class? One solution is Constructor Injection (part of Dependency Injection). For this example, the Model class is given a new constructor that takes an IDatabase instance which it is to use, rather than instantiate one itself.
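A minimal sketch of that constructor injection (shown in C# for brevity; the member names are illustrative):

// The Model depends only on the IDatabase abstraction; the caller decides which implementation it gets.
public interface IDatabase
{
    void Save(string key, string value);
}

public class Model
{
    private readonly IDatabase _database;

    public Model(IDatabase database)
    {
        _database = database;
    }

    public void SaveName(string name)
    {
        _database.Save("name", name);
    }
}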
This solves the original problem: the Model no longer references the concrete Database class and uses the database only through the IDatabase abstraction. But it introduces the problem mentioned in the question, which is that it goes against the Law of Demeter. That is, the caller of Model now has to know about IDatabase, when previously it did not. The Model is now exposing to its clients some detail about how it gets its job done.
Even if you were okay with this, there's another issue that seems to confuse a lot of people, including some trainers. There's an assumption that any time a class, such as Model, instantiates another class concretely, it's breaking the Dependency Inversion Principle and is therefore bad. But in practice, you can't follow these types of hard-and-fast rules. There are times when you need to use concrete classes. For instance, if you're going to throw an exception you have to "new it up" (e.g. throw new BadArgumentException(...)). Or use classes from the base system such as strings, dictionaries, etc.
There's no simple rule that works in all cases. You have to understand what it is you're trying to accomplish. If you're after testability, then the fact that the Model class references the Database class directly is not itself a problem; the problem is that the Model class has no other means of using another Database class. You solve this by implementing the Model class such that it uses IDatabase and allows a client to specify an IDatabase implementation. If one is not specified by the client, the Model can fall back to a concrete implementation.
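In code, that fallback might look roughly like this (still illustrative, reusing IDatabase from the sketch above; SqlDatabase stands in for whatever concrete default makes sense):

// A concrete default used only when the client does not specify an implementation.
public class SqlDatabase : IDatabase
{
    public void Save(string key, string value) { /* talk to the real database */ }
}

public class Model
{
    private readonly IDatabase _database;

    public Model() : this(new SqlDatabase()) { }  // default concrete implementation

    public Model(IDatabase database)              // client-specified implementation (e.g. a fake for tests)
    {
        _database = database;
    }
}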
This is similar to the design of many libraries, including the C++ Standard Library. For instance, look at the declaration of the std::set container:
template < class T,                     // set::key_type/value_type
           class Compare = less<T>,     // set::key_compare/value_compare
           class Alloc = allocator<T>   // set::allocator_type
         > class set;
You can see that it allows you to specify a comparer and an allocator, but most of the time you take the defaults, especially the allocator. The STL has many such facets, especially in the IO library, where detailed aspects of streaming can be augmented for localization, endianness, locales, etc.
In addition to testability, this allows reuse of the higher-level algorithm with entirely different implementations of the classes that the algorithm uses internally.
And finally, back to the assertion I made previously about scenarios where you would not want to invert the dependency. That is, there are times when you need to instantiate a concrete class, such as the exception class BadArgumentException. But if you're after testability, you can also argue that you do, in fact, want to invert that dependency as well. You may want to design the Model class so that all instantiations of exceptions are delegated to a class and invoked through an abstract interface. That way, code that tests the Model class can provide its own exception class whose usage the test can then monitor.
I've had colleagues give me examples where they abstract even system calls, such as "getsystemtime", simply so they can test daylight-saving and time-zone scenarios through their unit tests.
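For example, the system clock can be hidden behind an abstraction so that tests control time deterministically (an illustrative C# sketch; the interface name is not from the original):

using System;

// The production code asks an IClock for the time instead of calling the system directly.
public interface IClock
{
    DateTime UtcNow { get; }
}

public class SystemClock : IClock
{
    public DateTime UtcNow
    {
        get { return DateTime.UtcNow; }
    }
}

// A fixed clock lets a test exercise daylight-saving or time-zone edge cases deterministically.
public class FixedClock : IClock
{
    private readonly DateTime _instant;

    public FixedClock(DateTime instant)
    {
        _instant = instant;
    }

    public DateTime UtcNow
    {
        get { return _instant; }
    }
}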
Follow the YAGNI principle: don't add abstractions simply because you think you might need them. If you're practicing test-first development, the right abstractions become apparent, and only just enough abstraction is implemented to pass the test.
class Base {
public:
    void doX() {
        doA();
        doB();
    }

    virtual void doA() { /* does A */ }
    virtual void doB() { /* does B */ }
};
class LoggedBase : public Base {
public:
    LoggedBase(Logger& logger) : l(logger) {}

    virtual void doA() { l.log("start A"); Base::doA(); l.log("Stop A"); }
    virtual void doB() { l.log("start B"); Base::doB(); l.log("Stop B"); }

private:
    Logger& l;
};
Now you can create the LoggedBase using an abstract factory that knows about the logger. Nobody else has to know about the logger, nor do they need to know about LoggedBase.
class BaseFactory {
public:
    virtual Base& makeBase() = 0;
};

class BaseFactoryImp : public BaseFactory {
public:
    BaseFactoryImp(Logger& logger) : l(logger) {}

    virtual Base& makeBase() { return *(new LoggedBase(l)); }

private:
    Logger& l;   // the constructor initialises l, so the member must be declared
};
The factory implementation is held in a global variable:
BaseFactory* baseFactory;
And is initialized to an instance of BaseFactoryImp by 'main' or some function close to main. Only that function knows about BaseFactoryImp and LoggedBase. Everyone else is blissfully ignorant of them all.
Lots of developers think that testing private methods is a bad idea. However, all the examples I've found were based on the idea that private methods are private because calling them could break the object's internal state. But that's not the only reason to hide methods.
Let's consider the Facade pattern. My class's users need the 2 public methods. They would be too large. In my example, they need to load some complex structure from a database BLOB, parse it, fill some temporary COM objects, run the user's macro to validate and modify these objects, and serialize the modified objects to XML. Quite a lot of functionality for a single method :-) Most of these actions are required for both public methods. So, I've created about 10 private methods, and the 2 public methods call them. Actually, my private methods don't necessarily need to be private; they won't break the internal state of the instance. But when I don't want to test private methods, I have the following problems:
Publishing them means complexity for users (they have a choice they don't need)
I cannot imagine TDD style for such large public methods, where you have to write 500+ lines of code just to return something (not even the real result).
Data for these methods is retrieved from database, and testing DB-related functionality is much more difficult.
When I'm testing private methods:
I don't publish details that would confuse users. Public interface includes 2 methods.
I can work in TDD style (write small methods step-by-step).
I can cover most of class's functionality using test data, without database connection.
Could somebody describe what I am doing wrong? What design should I use to obtain the same benefits and not test private methods?
UPDATE: It seems to me I've already extracted everything I could into other classes. So I cannot imagine what else I could extract. Loading from the database is performed by the ORM layer; parsing the stream, serializing to XML, running the macro - everything is done by standalone classes. This class contains a quite complex data structure, routines to search and convert it, and calls to all the mentioned utilities. So I don't think anything else could be extracted; otherwise, its responsibility (knowledge of the data structure) would be divided between classes.
So, the best solution I see now is dividing it into 2 objects (the Facade itself and the real object, with the private methods made public) and moving the real object somewhere nobody would try to find it. In my case (Delphi) it would be a standalone unit; in other languages it could be a separate namespace. Another similar option is 2 interfaces, thanks for the idea.
I think you are putting too many responsibilities (implementations) into the facade. I would normally consider this to be a front-end for actual implementations that are in other classes.
So the private methods in your facade are likely to be public methods in one or more other classes. Then you can test them there.
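For instance (a hypothetical sketch, not the asker's actual code), the parsing step could live in its own class whose public API is tested directly, while the facade merely delegates to it:

// Hypothetical sketch: the facade delegates to a separately testable collaborator.
public class ParsedStructure { }

public class BlobParser
{
    // Public here, so it can be unit tested without going through the facade.
    public ParsedStructure Parse(byte[] blob)
    {
        // ... parsing logic ...
        return new ParsedStructure();
    }
}

public class ExportFacade
{
    private readonly BlobParser _parser = new BlobParser();

    // One of the two large public methods; it now just orchestrates the steps.
    public string ExportAsXml(byte[] blob)
    {
        ParsedStructure structure = _parser.Parse(blob);
        // ... validate with the user's macro, then serialize ...
        return "<export />";
    }
}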
Could somebody describe, what am I
doing wrong?
Maybe nothing?
If I want to test a method I make it default (package) scope and test it.
You already mentioned another good solution: create an interface with your two methods. Your clients access those two methods, and the visibility of the other methods doesn't matter.
Private methods are used to encapsulate some behavior that has no meaning outside of the class you are trying to test. You should never have to test private methods because only the public or protected methods of the same class will ever call private methods.
It may just be that your class is very complex and it will take significant effort to test it. However, I would suggest you look for abstractions that you can break out into their own classes. These classes will have a smaller scope of items and complexity to test.
I am not familiar with your requirements and design, but it seems that your design is procedural rather than object-oriented, i.e. you have 2 public methods and many private methods. If you break your class into objects where every object has its own role, it will be easier to test each of the "small" classes. In addition, you can set the "helper" objects' access level to package (the default in Java; I know there is a similar access level in C#). This way you are not exposing them in the API, but you can still unit test them independently (as they are units).
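In C#, the closest equivalent to Java's package access is internal, combined with granting the test assembly access to it (a minimal sketch; the type and assembly names are hypothetical):

// In the production assembly: the helper is internal, so it is not part of the public API,
// but the named test assembly may still see it.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProduct.Tests")]

internal class StructureSearcher
{
    internal int CountMatches(string pattern)
    {
        // ... search the data structure ...
        return 0;
    }
}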
Maybe you should take the time to look at the Clean Code Tech Talks from Miško Hevery. He is very insightful about how code should be written in order to be tested.
This is a bit of a controversial topic... Most TDDers hold the opinion that refactoring your methods for easier unit testing actually makes your design better. I think that this is often true, but the specific case of private methods behind public APIs is definitely an exception. So, yes, you should test the private method, and no, you shouldn't make it public.
If you're working in Java, here's a utility method I wrote that will help you test static private methods in a class:
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

import junit.framework.Assert;

public static Object invokeStaticPrivateMethod(Class<?> clazz, String methodName, Object... params) {
    Assert.assertNotNull(clazz);
    Assert.assertNotNull(methodName);
    Assert.assertNotNull(params);

    // find requested method
    final Method methods[] = clazz.getDeclaredMethods();
    for (int i = 0; i < methods.length; ++i) {
        if (methodName.equals(methods[i].getName())) {
            try {
                // this line makes testing private methods possible :)
                methods[i].setAccessible(true);
                return methods[i].invoke(clazz, params);
            } catch (IllegalArgumentException ex) {
                // maybe the method is overloaded - try finding another method with the same name
                continue;
            } catch (IllegalAccessException ex) {
                Assert.fail("IllegalAccessException accessing method '" + methodName + "'");
            } catch (InvocationTargetException ex) {
                // this makes finding out where the test failed a bit easier by
                // purging the unnecessary stack trace
                if (ex.getCause() instanceof RuntimeException) {
                    throw (RuntimeException) ex.getCause();
                } else {
                    throw new RuntimeException(ex.getCause());
                }
            }
        }
    }

    Assert.fail("method '" + methodName + "' not found");
    return null;
}
This could probably be rewritten for non-static methods as well, but those pesky private methods usually were static in my case, so I never needed that. :)
Suppose you have 8 private methods and 2 public ones. If you can execute a private method independently, i.e. without calling any of the other methods and without state-corrupting side effects, then unit testing just that method makes sense. But in that case there is no need for the method to be private!
In C# I would make such methods protected instead of private, and expose them as public in a subclass for testing.
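A small sketch of that approach (the class names are illustrative):

// Production class: the helper is protected rather than private.
public class ReportBuilder
{
    public string Build()
    {
        return Header() + "body";
    }

    protected virtual string Header()
    {
        return "header:";
    }
}

// Test-only subclass that re-exposes the helper so a unit test can call it directly.
public class TestableReportBuilder : ReportBuilder
{
    public string HeaderForTest()
    {
        return Header();
    }
}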
Given your scenario, it might make more sense for the testable methods to be public, and to let the user have a true facade with only the 2 public methods that they need for their interface.