My team is developing a system where we're using Unity as our IoC container. To provide NHibernate ISessions (units of work) scoped to each HTTP request, we're using Unity's ChildContainer feature to create a child container for each request and sticking the ISession in there.
We arrived at this approach after trying others (including defining per-request lifetimes in the container, which has issues of its own) and are now trying to decide on a unit-testing strategy.
Right now, the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing.
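Roughly, the wiring looks like this sketch (class and key names are illustrative; the real version would run from Global.asax or an IHttpModule):

    using System.Web;
    using Microsoft.Practices.Unity;
    using NHibernate;

    public static class ContainerBootstrapper
    {
        // Application-level container, built once at startup.
        public static IUnityContainer AppContainer { get; private set; }

        public static void InitApplication(ISessionFactory sessionFactory)
        {
            AppContainer = new UnityContainer();
            AppContainer.RegisterInstance(sessionFactory);
        }

        // BeginRequest: the child container gets the request's ISession.
        public static IUnityContainer BeginRequest(HttpContext context)
        {
            IUnityContainer child = AppContainer.CreateChildContainer();
            child.RegisterInstance(AppContainer.Resolve<ISessionFactory>().OpenSession());
            context.Items["RequestContainer"] = child;
            return child;
        }

        // EndRequest: disposing the child releases the ISession along with
        // any other per-request instances.
        public static void EndRequest(HttpContext context)
        {
            var child = context.Items["RequestContainer"] as IUnityContainer;
            if (child != null)
                child.Dispose();
        }
    }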
The pain increased when we decided to use Service Location from our domain layer to lazily resolve dependencies from the container. So now we have even more components wanting to talk to the container.
We are also using MSTest, which presents some concurrency dilemmas during testing.
So we're wondering, what do the bright folks out there in the SO community do to tackle this predicament?
How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during testing has the flexibility to build up and tear down the containers consistently, and still have the service-location bits reach the right containers?
I hope the question is clear, thanks!
Thanks for the replies. I agree that using Service Location is not the optimal approach - but it does seem necessary for this situation. The scenario is that we need our entities to resolve dependencies on demand, only when needed, for business-rule validation. Forcing all our entities to undergo constructor injection when they are materialized by NHibernate doesn't seem appropriate, if only for performance reasons.
We're considering a solution where the containers are stored in the HttpApplication/HttpContext at real runtime, and in static/ThreadStatic fields during testing. StructureMap has a similar approach baked in. Any thoughts on this kind of solution? Thanks!
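Something like this sketch (names are invented, and the storage-selection flag is just one way to switch):

    using System.Web;
    using Microsoft.Practices.Unity;

    // Ambient accessor whose storage can be swapped: HttpContext-backed at
    // real runtime, ThreadStatic-backed under test (which also keeps
    // MSTest's concurrent test runs isolated per thread).
    public static class ContainerContext
    {
        [System.ThreadStatic]
        private static IUnityContainer _testContainer;

        // Tests set this to true in their initialization.
        public static bool UseThreadStaticStorage { get; set; }

        public static IUnityContainer Current
        {
            get
            {
                return UseThreadStaticStorage
                    ? _testContainer
                    : (IUnityContainer)HttpContext.Current.Items["RequestContainer"];
            }
            set
            {
                if (UseThreadStaticStorage)
                    _testContainer = value;
                else
                    HttpContext.Current.Items["RequestContainer"] = value;
            }
        }
    }

A test would then build a container in its initialize method, assign it to ContainerContext.Current, and dispose it during cleanup.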
Also, this isn't necessarily integration testing (although it may play into that too). For example, we want to unit-test a particular entity's business rule behavior--during which this scenario will unfold.
I am definitely open to the Http object abstractions - I've used them and loved them in MVC; how can one get them going outside of MVC?
DI Containers should not be necessary during unit testing. Rather, a DI Container is used at application startup time to resolve the application's dependency graph, and then get out of the way.
However, it sounds like you have applied the Service Locator anti-pattern, and you are now feeling the pain of that. Unfortunately, there's no easy way out of this.
You obviously can't rely on the real HTTP context during unit testing, as it will not be available in that environment, so you will need to hide the HTTP objects away behind interfaces. If you are using .NET 3.5 SP1, you may be able to use the abstractions introduced in System.Web.Abstractions; otherwise, you can extract the interfaces yourself.
Once you have introduced these Seams into your system, you can use proper Dependency Injection (preferably Constructor Injection) to inject them into your consuming classes.
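For example, a consumer can take the abstraction through its constructor, so a unit test can hand it a stub or mock instead of the real context (a minimal sketch; the class name is illustrative):

    using System.Web;

    public class CurrentUserProvider
    {
        private readonly HttpContextBase _context;

        // Depends on the abstract HttpContextBase, not the concrete HttpContext.
        public CurrentUserProvider(HttpContextBase context)
        {
            _context = context;
        }

        public string GetUserName()
        {
            return _context.User.Identity.Name;
        }
    }

    // At runtime, wrap the real context:
    //   new CurrentUserProvider(new HttpContextWrapper(HttpContext.Current));
    // In a test, pass a test double derived from HttpContextBase.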
In any case, following Test-Driven Development can very effectively prevent this type of tight coupling from being introduced in the first place.
If I understand it correctly, in classic 3-tier/n-tier architecture the goal is to ultimately separate responsibilities in such a way that each layer shouldn't have to know about what is going on/being used internally in lower tiers.
However, if the objects in each tier (especially the business tier) are structured to be testable, their dependencies are defined as part of their public contracts: when testing a 2nd-tier object that has a 3rd-tier dependency, you mock/stub the 3rd-tier object and provide it to the 2nd-tier object. This means that, at implementation time, the first tier is responsible for grabbing a 3rd-tier dependency just to construct a 2nd-tier object.

I am not opposed to this if it's a very bland object, but if it's a data-access component that requires, for example, a connection string, the first tier should not be responsible for that. Apart from that, the more dependencies you have, the more the top tier is responsible for instantiating and passing in all of the dependencies of each object in the slice of the onion it's using.
The only way I've ever seen this problem sidestepped is through the use of IoC, but I'm working in a situation where that option is off the table. Are there any other ways to structure the code to be testable without making the top tier instantiate and provide the dependencies for every tier? I should mention I am working on a web app.
(I've gone over this post as a refresher on "the rules.")
EDIT: I think I can sum up the problem as such: without using some kind of IoC container or bootstrapper, is there a way to structure the code to be testable that doesn't violate the dependency depth principle, which is that every layer in the onion can only reference the layer below it?
It is not about the top tier itself but about a bootstrapper that initializes your application. Depending on the rest of the architecture, it can either be responsible for launching the entry point in the top tier of your application, or it can simply be part of the top tier's initialization (the latter is even used with IoC frameworks).
Example in .NET:
If you are building a standalone application, you can initialize everything as part of the main execution path and launch your entry point only once everything is initialized. In the case of a web application or web services, such a bootstrapper usually runs in the application-start handler, and your tiers are used when handling HTTP requests.
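For instance, a hand-rolled composition root might look like this sketch (all type names are invented for illustration):

    using System.Configuration;

    public interface IOrderRepository { /* data-access operations */ }

    public class SqlOrderRepository : IOrderRepository
    {
        public SqlOrderRepository(string connectionString) { /* ... */ }
    }

    public class OrderService
    {
        public OrderService(IOrderRepository repository) { /* ... */ }
    }

    public class OrderController
    {
        public OrderController(OrderService service) { /* ... */ }
    }

    // The bootstrapper is the only place that knows the whole dependency
    // graph; each tier still only references the tier directly below it,
    // and only the bootstrapper ever sees the connection string.
    public static class Bootstrapper
    {
        public static OrderController CreateOrderController()
        {
            string cs = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;
            return new OrderController(new OrderService(new SqlOrderRepository(cs)));
        }
    }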
By the way, the question should be about IoC containers, not about IoC itself. IoC is the approach of controlling inner logic from the outside, which is achieved by injecting dependencies; it is the main approach for building easily testable applications. An IoC container is part of a framework which builds dependency hierarchies for you.
You can invert control without an IoC container, as @LadislavMrnka explained.
You can even do Dependency Inversion and benefit from loose coupling and testability without doing interface-based IoC. Events and higher-order functions are two ways to do that.
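For instance, a dependency can be injected as a plain delegate rather than an interface (a C# sketch with invented names):

    using System;

    public class InvoiceCalculator
    {
        private readonly Func<string, decimal> _priceLookup;

        // The price lookup is a higher-order-function dependency: production
        // code passes a real database call, a test passes a lambda.
        public InvoiceCalculator(Func<string, decimal> priceLookup)
        {
            _priceLookup = priceLookup;
        }

        public decimal Total(string[] skus)
        {
            decimal total = 0m;
            foreach (var sku in skus)
                total += _priceLookup(sku);
            return total;
        }
    }

    // In a test: new InvoiceCalculator(sku => 10m).Total(new[] { "A", "B" }) == 20m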
You can decouple part of your code base; there's no need to do it all at once. An object can have its dependencies decided and/or injected by the object that consumes it; you don't have to defer it all the way down to a bootstrapper.
Considering this, the answer to your question is "yes" and in many different ways :)
I think it's a good idea indeed to start small, making a few components testable (and tested) at first rather than trying to testable-ify and IoC-ify everything up front in one long, tedious and risky refactoring. IoC can come later, when most of your code base has been tested and sanitized.
In my workplace (and a lot of other places), there is a lot of emphasis on building architecture around services (I am working at an e-commerce startup). However, I think services are implicitly considered to be distributed. I am a believer in the first law of distribution: "don't distribute." So I believe we should not unnecessarily complicate the architecture; it should be an architecture that can evolve.

One way to approach the problem would be to create well-defined namespaces and build code around them, but keep the communication via Java APIs (this keeps the monitoring requirements low and the reliability/availability problems low). This can easily evolve into a distributed architecture by wrapping modules in web services as and when the scale requirements kick in.

So, the question is: what are the cons of writing code as a single application and evolving it into distributed services, rather than jumping straight into a web-services-based architecture? Am I right in assuming that services should imply the basic principles of design (abstraction, encapsulation, etc.), rather than distribution over a network?
Distribution requires modularity. However, it requires more than just modularity: it also requires coarse-grained interaction between the modules.
For example, in a single-process ecommerce system, you might have separate modules for managing the user's shopping cart and calculating prices. They might interact by the cart asking the calculator to price an item, then another item, etc. That would be perfectly fine.
However, in a distributed system, that would require a torrent of small method calls, which is inefficient; you might get away with it if you used CORBA for distribution, but with SOAP, you'd be in trouble. Rather, you would want to have the cart ask the calculator to price the whole order in one go. That might be worse from a separation of concerns point of view (why should the calculator have to know about the idea of carts?), but it would be required to make the system perform adequately.
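To illustrate the contrast (a C#-style sketch; the type names are invented):

    // Fine-grained contract: fine in-process, a torrent of calls over a network.
    public interface IPriceCalculator
    {
        decimal PriceItem(string sku, int quantity);
    }

    // Coarse-grained contract: one round trip per order, suited to distribution,
    // at the cost of the calculator now knowing about whole orders.
    public interface IOrderPriceCalculator
    {
        decimal PriceOrder(OrderLine[] lines);
    }

    public class OrderLine
    {
        public string Sku;
        public int Quantity;
    }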
Related to granularity, there's also the problem of modules interacting via interfaces versus implementations. Within a single process, you can define a set of interfaces through which modules interact; modules can pass each other objects implementing those interfaces without having to tell each other about the implementations (e.g. a scheduler module could be passed anything implementing interface Job { void run(); }). Across a network, the requirement for coarse grain means that any objects passed must be passed by value (because passing by reference would entail fine-grained calls back to the passing module - unless you were using mobile code, which you aren't, because nobody is), which means both modules must know about and agree on the implementations of those objects.
So, while building a single-process system in a modular way makes it easier to implement SOA later, it doesn't make it as simple as wrapping each module in a SOAP interface. At least, not unless you build your system in a coarse-grained manner from the start, which means throwing away a number of sound and helpful software engineering practices.
I'm working on the initial architecture for a solution for which an SOA approach has been recommended by a previous consultant. From reading the Erl book(s) and applying to previous work with services (and good design patterns in general), I can see the benefits of such an approach. However, this particular group does not currently have any traditional needs for implementing web services -- there are no external consumers, and no integration with other applications.
What I'm wondering is, are there any advantages to going with web services strictly to stick to SOA, that we couldn't get from just implementing objects that are "service ready"?
To explain, an example. Let's say you implement the entity "Person" as a service. You have to implement:
1. Business object/logic
2. Translator to service data structure
3. Translator from service data structure
4. WSDL
5. Service data structure (XML/JSON/etc)
6. Assertions
Now, on the other hand, if you don't go with a service, you only have to implement #1, and make sure the other code accesses it through a loose reference (using dependency injection, a wrapper, etc.). Then, if it later becomes apparent that a service is needed, you can just point the reference at a wrapper object containing the #2/#3 logic above (so no caller objects need updating), and implement the same number of objects without any penalty in the amount of development you have to do - no extra objects or code have to be created compared to doing it all up front.
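A sketch of what I mean (names invented):

    // Callers depend only on this contract (#1 plus a seam).
    public interface IPersonService
    {
        Person GetPerson(int id);
    }

    // Initial, in-process implementation: plain business logic.
    public class LocalPersonService : IPersonService
    {
        public Person GetPerson(int id) { /* business logic */ return new Person(); }
    }

    // Added later only if a real service materializes; callers stay untouched,
    // and this wrapper carries the #2/#3 translation logic.
    public class RemotePersonService : IPersonService
    {
        public Person GetPerson(int id)
        {
            object wireData = CallService(id);   // talks #4/#5 on the wire
            return TranslateFromWire(wireData);  // #3: translator
        }

        private object CallService(int id) { /* SOAP/REST call */ return null; }
        private Person TranslateFromWire(object dto) { /* mapping */ return new Person(); }
    }

    public class Person { /* the business object */ }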
So, if the amount of work that has to be done is the same whether the service is implemented initially or as-needed, and there is no current need for external access through a service, is there any reason to initially implement it as a service just to stick to SOA?
Generally speaking, you'd be better off waiting.
You could design and implement a web service that is simply a technical facade exposing the underlying functionality - but the question is, would you just do a straight one-for-one "reflection" of that underlying functionality? If so, did you design that underlying thing in such a way that it's fit for external callers? Does the API make sense? Does it expose members that should be private? And so on.
Another factor to consider: do you really know what the callers of the service want or need? The risk you run with building a service up front is that (as you're basically only guessing) you might need to rewrite it when the first customers/callers come along. This could result in all sorts of work, including test cases, backwards compatibility if it drives change down to the lower levels, and so on.
Having said that, the advantage of putting something out there is that it might help spark use of the service and get people thinking - a more agile, principled approach.
If your application is an isolated client-type application (a UI that connects to a service just to get data out of the database), implementing an SOA-like architecture is usually overkill.
Nevertheless, there could be security, maintainability, or serviceability aspects where using web services is a must, e.g. some clients need access to the data from outside the firewall, or you prefer to separate your business logic/data access from the UI and put it on one server so that you don't need to redeploy the app every time some business rules change.
Enterprise applications require many components interacting with each other and many developers working on them. In this type of scenario, using an SOA-type architecture is the way to go.
The main reason to adopt SOA is to reduce the dependencies.
Enterprise applications usually depend on a lot of external components (logic or data), and you don't want to integrate these components by sharing assemblies.
Imagine that you share a component that implements some specific calculation: would you deploy this component to all the dependent applications? What happens if you want to change some of the calculation logic? Would you ask all teams to upgrade their references, recompile, and redeploy their apps?
I recently posted on my blog a story where the former architect had also chosen not to use web services and thought that sharing assemblies was fine. The result was chaos. Read more here.
As I mentioned, it depends on your requirements. If it's a monolithic application and you're sure you'll never integrate it and never reuse the business logic/data access, a 2-tier application (UI/DB) is good enough.
Nevertheless, this is an architectural decision, and like most architectural decisions, it's costly to change. Of course you can still factor in a web-service model later on, but it's not as easy as you might think. Refactoring an existing app to add a service layer is usually a difficult task, even with a good interface-based design. Examples of things that could go wrong: data structures that are not serializable, circular references in properties, constructor overloading, dependencies on internal behaviors…
What unit tests generally tend to be hard to write and why? I am particularly interested in methods which don't need mocking.
Thanks
Two cases where unit testing is made difficult:
Methods that invoke static methods that belong to other classes, particularly when those other classes have static state, or do significant work. Being stuck trying to "unit" test a method that, through transitive closure, does database queries can suck.
Methods that create instances of other classes directly (i.e., via new), particularly when the constructor of the other class itself requires static state, or does significant work.
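A small sketch contrasting the two (invented names):

    // Hard to test: a static call and a direct 'new' hide the dependencies,
    // so a "unit" test transitively hits the real database.
    public class HardReportGenerator
    {
        public string Generate(int id)
        {
            string data = Database.Query(id);    // static dependency
            var formatter = new HtmlFormatter(); // direct construction
            return formatter.Format(data);
        }
    }

    // Easier to test: the same dependencies arrive via the constructor,
    // so a test can substitute fakes.
    public class ReportGenerator
    {
        private readonly IDatabase _db;
        private readonly IFormatter _formatter;

        public ReportGenerator(IDatabase db, IFormatter formatter)
        {
            _db = db;
            _formatter = formatter;
        }

        public string Generate(int id)
        {
            return _formatter.Format(_db.Query(id));
        }
    }

    public interface IDatabase { string Query(int id); }
    public interface IFormatter { string Format(string data); }

    public static class Database
    {
        public static string Query(int id) { /* real DB access */ return ""; }
    }
    public class HtmlFormatter : IFormatter
    {
        public string Format(string data) { return "<p>" + data + "</p>"; }
    }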
A great A to Z guide of testability concerns with side by side code examples of easy/hard to test code can be found in Misko's extensive testability guide.
Click on the "flaw #x" links (they look like plain text but they're separate links).
Big, complex methods that do lots of things at the same time that really should have been separated. (Example: get something from a configuration object, create a URL based on some variables, encode the URL, send a request, do something with the response... you get the idea.)
Everything static. Things created with new, although I haven't found a proper way to avoid that without spamming the entire application with factories.
It's almost always about dependencies.
Most code depends on external systems such as databases, file systems, email clients, networks, etc. It's also common to have dependencies on major internal systems (e.g., the spell-checking module, or the recalc engine...).
If these dependencies are not easily substitutable, then the system becomes hard to test.
Classes that call statics and singletons are the worst offenders, but any class that doesn't accept its dependencies via constructor or properties will be hard to test.
There are some legitimate situations that are hard to test:
Concurrency
User Interface - this is why the trend is towards MVC architectures that create ViewModels, which can be easily tested. The actual rendering is minimized; this is called the humble dialog or humble object pattern in the testing literature.
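A minimal sketch of the humble-object idea (invented names):

    public interface IAuthService
    {
        bool Authenticate(string user, string password);
    }

    // The ViewModel holds the presentation logic and is plain, testable
    // code; the real view is a thin shell that just binds to it.
    public class LoginViewModel
    {
        private readonly IAuthService _auth;

        public LoginViewModel(IAuthService auth) { _auth = auth; }

        public string UserName { get; set; }
        public string Password { get; set; }
        public string Error { get; private set; }

        public bool CanLogin
        {
            get { return !string.IsNullOrEmpty(UserName) && !string.IsNullOrEmpty(Password); }
        }

        public void Login()
        {
            Error = _auth.Authenticate(UserName, Password) ? null : "Invalid credentials";
        }
    }

    // A unit test drives CanLogin and Login with a fake IAuthService;
    // no window or rendering is involved.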
I've been doing ColdFusion for 2 years, and I've always used ColdSpring for dependency injection. I want to see if I can survive without it. What are the alternatives?
For singleton:
onApplicationStart() and inject services into the Application scope?
For transient:
Factory pattern? XXXFactory.createXXX()? or... XXXService.createXXX()?
Please comment, and share your alternative.
Henry,
I would write a 'DIManager' CFC to manage my own dependencies and persist it in the Application scope using onApplicationStart(), so it would be available for the life of the application.
Each service would be responsible for creating the transients it serves, as you suggested in your question.
I would opt for using ColdFusion 9's caching methods within my 'DIManager' to manage the persistence of the singletons, as I expect even greater support for storage mechanisms as ColdFusion evolves. You could also define profiles for each singleton so that some expire after a period of time while others live for the life of the application; this would provide greater control than using the Application scope. A profile could even place an object in a clustered scope, server scope, etc., depending on what your specific challenge is.
I almost went this route for a project I am about to complete, but decided not to reinvent the wheel and simply went with ColdBox, since it has fantastic caching abilities. I should also add that the ColdBox team has almost completed its goal of breaking the framework into separate units. The final separate piece is WireBox, which should be released soon - so if you have limitations on using a framework, or don't like MVC or AOP, you can write your application in your own way and still use WireBox or the other great IoC frameworks that already exist (like the one you have been using :).
Hope that helps.
I look forward to other answers as well.
There are certainly cases where a DI framework hides some code smells, say passing in a pile of parameters automagically. By doing things by hand, or at least knowing what doing so would entail, you'll make your designs cleaner.
It's probably a bit like learning C: even if you don't use it often, it's good stuff to know.
There's an interesting article about do-it-yourself DI here that focuses on Java but might be worth your while.
These are all great suggestions. My main goal lately when setting up the family of supporting services and whatnot has been to brace for caching and divorce the application code from the API's inner workings. Specifically, this translates into always using factories to generate transients and always having singleton services that receive requests from the application.
I don't think I can live without AOP anymore though. I've been able to solve so many surprise issues with layered interceptors that I should really build a small shrine at my desk to worship AOP from.
So, in summary, when building your own solution try to implement singleton services and transient factories. AOP is a huge bonus, but I couldn't tell you how to implement that. I'm a ColdSpring user, and thankful it does what it does!