SOA: Is it preferable to implement a service instead of just writing service-ready code, when no external access is needed? - web-services

I'm working on the initial architecture for a solution for which an SOA approach was recommended by a previous consultant. From reading the Erl book(s) and applying them to previous work with services (and good design patterns in general), I can see the benefits of such an approach. However, this particular group does not currently have any traditional needs for implementing web services -- there are no external consumers and no integration with other applications.
What I'm wondering is, are there any advantages to going with web services strictly to stick to SOA, that we couldn't get from just implementing objects that are "service ready"?
To explain, here's an example. Let's say you implement the entity "Person" as a service. You have to implement:
1. Business object/logic
2. Translator to service data structure
3. Translator from service data structure
4. WSDL
5. Service data structure (XML/JSON/etc)
6. Assertions
Now, on the other hand, if you don't go with a service, you only have to implement #1 and make sure the other code accesses it through a loose reference (using dependency injection, a wrapper, etc.). Then, if it later becomes apparent that a service is needed, you can simply point that reference at the #2/#3 logic above in a wrapper object (so no caller objects need updating) and implement the same number of objects, with no penalty in the amount of development you have to do -- no extra objects or code have to be created compared to doing it all up front.
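To make "service ready" concrete, here is a rough sketch of what I have in mind (the names are just placeholders):

// Hypothetical abstraction that all callers depend on.
public interface IPersonProvider
{
    Person GetPerson(int id);
}

// #1: the plain business object/logic, used directly today.
public class LocalPersonProvider : IPersonProvider
{
    public Person GetPerson(int id)
    {
        // ...load the person and apply business rules locally...
        return new Person { Id = id, Name = "example" };
    }
}

// Added only if/when a service becomes necessary: a wrapper that does the
// #2/#3 translation to and from the service data structure, so no caller
// has to change.
public class ServicePersonProvider : IPersonProvider
{
    public Person GetPerson(int id)
    {
        // ...call the web service here and translate the response back
        // into the domain Person...
        return new Person { Id = id, Name = "from service" };
    }
}

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}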
So, if the amount of work that has to be done is the same whether the service is implemented initially or as-needed, and there is no current need for external access through a service, is there any reason to initially implement it as a service just to stick to SOA?

Generally speaking, you'd be better off waiting.
You could design and implement a web service that is simply a technical facade exposing the underlying functionality - but would you just do a straight one-for-one 'reflection' of that underlying functionality? If yes, did you design that underlying thing in such a way that it's fit for external callers? Does the API make sense, does it expose members that should be private, and so on?
Another factor to consider is: do you really know what the callers of the service want or need? The risk you run with building a service now is that, since you're basically only guessing, you might need to rewrite it when the first customers/callers come along. That could result in all sorts of work, including test cases, backwards compatibility if it drives change down to the lower levels, and so on.
Having said that, the advantage of putting something out there is that it might help spark use of the service and get people thinking - a more agile, principled approach.

If your application is an isolated client-type application (a UI that connects to a service just to get data out of the database), implementing an SOA-like architecture is usually overkill.
Nevertheless, there could be security, maintainability, or serviceability aspects where using web services is a must - e.g. some clients need access to the data outside the firewall, or you prefer to separate your business logic/data access from the UI and put it on one server so that you don't need to redeploy the app every time some business rule changes.
Enterprise applications require many components interacting with each other and many developers working on them. In this type of scenario, an SOA-type architecture is the way to go.
The main reason to adopt SOA is to reduce the dependencies.
Enterprise applications usually depend on a lot of external components (logic or data), and you don't want to integrate these components by sharing assemblies.
Imagine you share a component that implements some specific calculation: would you deploy this component to all the dependent applications? What happens if you want to change some of the calculation logic? Would you ask all teams to upgrade their references and recompile and redeploy their apps?
I recently posted on my blog a story where the former architect had also chosen not to use web services and thought that sharing assemblies was fine. The result was chaos. Read more here.

As I mentioned, it depends on your requirements. If it's a monolithic application and you're sure you'll never integrate this app and never reuse the business logic/data access, a 2-tier application (UI/DB) is good enough.
Nevertheless, this is an architectural decision, and like most architectural decisions it's costly to change. Of course you can still factor in a web service model later on, but it's not as easy as you might think. Refactoring an existing app to add a service layer is usually a difficult task to accomplish, even when using a good design based on interfaces. Examples of things that can go wrong: data structures that are not serializable, circular references in properties, constructor overloading, dependencies on some internal behaviors...
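As a contrived illustration of the serialization point, a class like the following works fine in-process but becomes a problem when you later try to push it through a typical XML/DataContract service boundary:

using System.Collections.Generic;

// Fine in-process; a problem once exposed through a service layer:
// the circular Parent <-> Children references and the missing
// parameterless constructor both trip up typical XML/DataContract
// serialization unless they are dealt with explicitly.
public class Category
{
    public Category(string name)              // no parameterless constructor
    {
        Name = name;
        Children = new List<Category>();
    }

    public string Name { get; private set; }
    public Category Parent { get; set; }      // circular reference
    public List<Category> Children { get; private set; }
}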

Related

Is testable onion-style code in 3- or n-tier architecture possible without using an IoC container?

If I understand it correctly, in classic 3-tier/n-tier architecture the goal is to ultimately separate responsibilities in such a way that each layer shouldn't have to know about what is going on/being used internally in lower tiers.
However, if the objects in each tier (especially the business tier) are structured to be testable, their dependencies are defined as part of their public contracts (when testing a 2nd-tier object with a 3rd-tier object dependency, you mock/stub out the 3rd-tier object and provide it to the 2nd-tier object). This means that at implementation time, the first tier is responsible for grabbing a 3rd-tier dependency for use in constructing a 2nd-tier object. I am not opposed to this if it's a very bland object, but if it is a data access component that requires, for example, a connection string, the first tier should not be responsible for that. Apart from that, the more dependencies you have, the more the top tier is responsible for instantiating and passing in all of the dependencies of each object in the slice of the onion it's using.
The only way I've ever seen this problem subverted is through the use of IoC, but I'm working in a situation where that option is off the table. Are there any other ways to structure the code to be testable but not face the problem of instantiating/providing dependencies for each tier in the top tier? I should mention I am working in a web app.
(I've gone over this post as a refresher on "the rules.")
EDIT: I think I can sum up the problem as such: without using some kind of IoC container or bootstrapper, is there a way to structure the code to be testable that doesn't violate the dependency depth principle, which is that every layer in the onion can only reference the layer below it?
It is not about the top tier itself, but about a bootstrapper which will initialize your application. Depending on the rest of the architecture, it can either be responsible for launching the entry point in the top tier of your application, or it can simply be part of top-tier initialization (that is even used with IoC frameworks).
Example in .NET:
If you are building a standalone application, you can initialize everything as part of the main execution path, and only when everything is initialized do you launch your entry point. In the case of a web application or web services, such a bootstrapper usually lives in the application start handler, and your tiers are used when handling HTTP requests.
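A rough sketch of such a bootstrapper (the types are invented just to show the wiring):

using System;

// Hypothetical minimal types, just to make the wiring concrete.
public interface ICustomerRepository { string GetName(int id); }

public class SqlCustomerRepository : ICustomerRepository
{
    private readonly string _connectionString;
    public SqlCustomerRepository(string connectionString) { _connectionString = connectionString; }
    public string GetName(int id) { /* would query the database here */ return "customer " + id; }
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;
    public CustomerService(ICustomerRepository repository) { _repository = repository; }
    public string Describe(int id) { return _repository.GetName(id); }
}

public static class Program
{
    // The bootstrapper: build the whole object graph by hand, then
    // launch the entry point.
    public static void Main()
    {
        ICustomerRepository repository = new SqlCustomerRepository("connection string here");
        var service = new CustomerService(repository);

        Console.WriteLine(service.Describe(42));
    }
}

// In a web application the same composition would typically live in
// Global.asax's Application_Start handler instead of Main.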
By the way, the question should really be about IoC containers, not about IoC itself. IoC is the approach of controlling inner logic from the outside - that is achieved by injecting dependencies. It is the main approach for easily testable applications. An IoC container is part of a framework which builds the dependency hierarchies for you.
You can invert control without an IoC container, as @LadislavMrnka explained.
You can even do Dependency Inversion and benefit from loose coupling and testability without doing interface-based IoC. Events and higher-order functions are two ways to do that.
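For instance, here is one possible shape (just a sketch, with invented names): a class that takes its dependency as a delegate, which is trivially testable without any container:

using System;

// The processor depends on "something that sends a notification",
// expressed as a delegate rather than an interface.
public class OrderProcessor
{
    private readonly Action<string> _notify;

    public OrderProcessor(Action<string> notify)
    {
        _notify = notify;
    }

    public void Process(int orderId)
    {
        // ...business logic...
        _notify("Processed order " + orderId);
    }
}

public static class Demo
{
    public static void Main()
    {
        // Production wiring might send an email; a unit test can pass a
        // lambda that just records the message - no container involved.
        var processor = new OrderProcessor(msg => Console.WriteLine(msg));
        processor.Process(7);
    }
}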
You can decouple part of your code base; there's no need to do it all at once. An object can have its dependencies decided and/or injected by the object that consumes it - you don't have to defer it all the way down to a bootstrapper.
Considering this, the answer to your question is "yes" and in many different ways :)
I think it's a good idea indeed to start small, making a few components testable (and tested) at first rather than trying to testable-ify and IoC-ify everything up front in one long, tedious and risky refactoring. IoC can come later, when most of your code base has been tested and sanitized.

static and dynamic evolution of services

I am reading about the challenges of concurrent and networked software in Pattern-Oriented Software Architecture, Vol. 2.
Service access often involves invoking remote operations on reusable components like the OMG Event Service, etc. Supporting the static and dynamic evolution of services and applications is another key challenge in networked software systems.
Evolution can occur in the following ways:
Interfaces to and connectivity between component service roles can change, often at run-time, and new service roles can be implemented and installed into existing components.
It is even more challenging to determine how to access services that are configured into a system 'on-demand' and whose implementations were unknown when the system was originally designed. Here the design challenges are two-fold.
First, an application must export new services, even though it may not know their detailed interfaces.
Second, an application must integrate these services into its own control flow and processing sequence transparently and robustly, even at run-time.
I need your help in understanding the above text by answering the following questions.
What does author mean by "Interfaces to and connectivity between component service roles can change, often at run-time" ? Request to explain with easy to undestand example.
What does the author mean by the two points mentioned under the on-demand challenges above? Please elaborate on those two points.
Thanks for your time and help.
1. What does the author mean by "Interfaces to and connectivity between component service roles can change, often at run-time"?
I'm not sure exactly. Interfaces change over time because:
New technology standards can be adopted - say moving from SOAP to REST, or from XML to JSON - but that tends to happen slowly, over time, through deployment; whereas for me "run-time" is a memory space in which things run, and I don't see interfaces changing themselves that fast - otherwise how could anyone integrate with them?
The API or interface contract itself changes to fulfill business need.
2. What does the author mean by the two points mentioned under the on-demand challenges above?
Hmmm, good design patterns tend to survive time well (they never change, because they are never broken - like SOLID). The book you are referring to was written in 2000, I think - a lot has changed since then, so whilst the pattern may survive, maybe the way we'd now describe it has changed (i.e. what he means by "export new services" is open to interpretation)...
1. First, an application must export new services, even though it may not know their detailed interfaces.
Separation of Concerns (basic OO stuff): all parts of your app don't (and shouldn't) inherently know what the other parts are doing internally; likewise, as long as someone (including an external system) is satisfying the interface, who cares how it does so internally.
2. Second, an application must integrate these services into its own control flow and processing sequence transparently and robustly, even at run-time.
I take this to mean that the program should never break, it should always compile, and if the application is dynamically creating and executing code (say, based on user input) then there need to be checks in place so that the dynamic code doesn't break the app either.
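One way to picture both points is a plug-in style host (a rough .NET-flavoured sketch of my interpretation, not something taken from the book):

using System;
using System.Reflection;

// The host application defines the contract it expects new services to satisfy...
public interface IPluggableService
{
    void Execute();
}

public static class ServiceHost
{
    // ...and can integrate an implementation it did not know about at design
    // time, for example one found in an assembly dropped in at run-time.
    public static void RunService(string assemblyPath, string typeName)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        var service = (IPluggableService)Activator.CreateInstance(assembly.GetType(typeName));

        // Integrated into the host's own control flow, but guarded so a
        // misbehaving plug-in cannot take the whole application down.
        try
        {
            service.Execute();
        }
        catch (Exception ex)
        {
            Console.WriteLine("Plug-in failed: " + ex.Message);
        }
    }
}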

services based architecture should not necessarily imply distribution?

In my workplace (and a lot of other areas), there is a lot of emphasis on building architecture around services. (I am working in an e-commerce startup.) However, I think services are implicitly considered to be distributed. I am a believer in the first law of distribution - "don't distribute". So I believe that we should not unnecessarily complicate the architecture. It should be an architecture which can evolve. One of the ways to approach the problem would be to create well-defined namespaces and build code around them, but keep the communication via a Java API (this keeps the monitoring requirement low, and reliability/availability problems low). This can easily be evolved into a distributed architecture by wrapping modules in web services as and when the scale requirements kick in. So, the question is: what are the cons of writing code as a single application and evolving into distributed services, rather than jumping straight into implementing a web-services-based architecture? Am I right in assuming that services should imply the basic principles of design (abstraction, encapsulation, etc.), rather than distribution over a network?
Distribution requires modularity. However, it requires more than just modularity: it also requires coarse-grained interaction between the modules.
For example, in a single-process ecommerce system, you might have separate modules for managing the user's shopping cart and calculating prices. They might interact by the cart asking the calculator to price an item, then another item, etc. That would be perfectly fine.
However, in a distributed system, that would require a torrent of small method calls, which is inefficient; you might get away with it if you used CORBA for distribution, but with SOAP, you'd be in trouble. Rather, you would want to have the cart ask the calculator to price the whole order in one go. That might be worse from a separation of concerns point of view (why should the calculator have to know about the idea of carts?), but it would be required to make the system perform adequately.
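To make the granularity point concrete, here is a rough sketch (with invented names) of the two styles of contract:

using System.Collections.Generic;

public class OrderLine
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

// Fine-grained: perfectly reasonable in-process, but a torrent of
// round trips if each call goes over the network.
public interface IPriceCalculator
{
    decimal PriceItem(string sku, int quantity);
}

// Coarse-grained: one round trip prices the whole order, at the cost
// of the calculator now knowing what an order looks like.
public interface IOrderPricingService
{
    decimal PriceOrder(IList<OrderLine> lines);
}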
Related to granularity, there's also the problem of modules interacting via interfaces or implementations. With a single process, you can define a set of interfaces through which modules will interact; modules can pass each other objects implementing those interfaces without having to tell each other about the implementations (eg a scheduler module could be passed anything implementing interface Job { void run(); }). Across a network, the requirement for coarse grain means that any objects passed must be passed by value (because passing by reference would entail fine-grained calls back to the passing module - unless you were using mobile code, which you aren't, because nobody is), which means that both modules must know about and agree on the implementations of the objects.
So, while building a single-process system in a modular way makes it easier to implement SOA later, it doesn't make it as simple as wrapping each module in a SOAP interface. At least, not unless you build your system in a coarse-grained manner from the start, which means throwing away a number of sound and helpful good software engineering practices.

Testing system where App-level and Request-level IoC containers exist

My team is in the process of developing a system where we're using Unity as our IoC container; and to provide NHibernate ISessions (Units of work) over each HTTP Request, we're using Unity's ChildContainer feature to create a child container for each request, and sticking the ISession in there.
We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit testing strategy.
Right now, the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing.
The pain increased when we decided to use Service Location from our Domain layer to "lazily" resolve dependencies from the container. So now we have more components wanting to talk to the container.
We are also using MSTest, which presents some concurrency dilemmas during testing as well.
So we're wondering, what do the bright folks out there in the SO community do to tackle this predicament?
How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during testing has the flexibility to build up and tear down the containers consistently, and have the Service Location bits get to those precise containers?
I hope the question is clear, thanks!
Thanks for the replies. I agree that using Service Location is not the optimal approach - but it does seem necessary for this situation. The scenario is that we need our Entities to resolve dependencies, on-demand, only when needed - for business rule validation. Forcing all our entities, on being materialized by NHibernate, to undergo constructor injection, doesn't seem appropriate, at a minimum for performance reasons.
We're considering a solution where the containers are stored either in the HttpApplication/HttpContext at real runtime, and in static/ThreadStatic fields during test. StructureMap has a similar approach baked-in. Any thoughts on this kind of solution? Thanks!
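Roughly what we have in mind (very much a sketch; the type names and dictionary keys are invented, and we haven't settled on this):

using System;
using System.Web;
using Microsoft.Practices.Unity;

// The "real" provider reads the containers from the HTTP objects...
public interface IContainerProvider
{
    IUnityContainer ApplicationContainer { get; }
    IUnityContainer RequestContainer { get; }
}

public class HttpContainerProvider : IContainerProvider
{
    public IUnityContainer ApplicationContainer
    {
        get { return (IUnityContainer)HttpContext.Current.Application["appContainer"]; }
    }

    public IUnityContainer RequestContainer
    {
        get { return (IUnityContainer)HttpContext.Current.Items["requestContainer"]; }
    }
}

// ...while the test-time provider keeps them in static/ThreadStatic fields.
public class TestContainerProvider : IContainerProvider
{
    [ThreadStatic]
    private static IUnityContainer _requestContainer;

    public IUnityContainer ApplicationContainer { get; set; }

    public IUnityContainer RequestContainer
    {
        get { return _requestContainer; }
    }

    // Test setup creates a fresh child container per test/"request".
    public void BeginRequest()
    {
        _requestContainer = ApplicationContainer.CreateChildContainer();
    }
}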
Also, this isn't necessarily integration testing (although it may play into that too). For example, we want to unit-test a particular entity's business rule behavior--during which this scenario will unfold.
I am definitely open to the Http object abstractions - I've used them and loved them in MVC; how can one get them going outside of MVC?
DI Containers should not be necessary during unit testing. Rather, a DI Container is used at application startup time to resolve the application's dependency graph, and then get out of the way.
However, it sounds like you have applied the Service Locator anti-pattern, and you are now feeling the pain of that. Unfortunately, there's no easy way out of this.
You obviously can't rely on the real HTTP Context during unit testing, as it will not be available to you in that environment, so you will need to hide them away behind interfaces. If you are using .NET 3.5 SP1, you might be able to use the abstractions introduced in System.Web.Abstractions, but otherwise, you can extract some yourself.
Once you have introduced these Seams into your system, you can use proper Dependency Injection (preferably Constructor Injection) to inject them into your consuming classes.
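As a minimal sketch of what that can look like (the class and the Items key are just examples, not prescriptions):

using System.Web;

// The consumer depends on the abstract HttpContextBase
// (System.Web.Abstractions), not on HttpContext.Current directly.
public class RequestSessionProvider
{
    private readonly HttpContextBase _context;

    public RequestSessionProvider(HttpContextBase context)
    {
        _context = context;
    }

    public object GetCurrentSession()
    {
        // "nhibernate.session" is just an example key.
        return _context.Items["nhibernate.session"];
    }
}

// At run-time: new RequestSessionProvider(new HttpContextWrapper(HttpContext.Current));
// In a unit test: pass a stub or mock derived from HttpContextBase.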
In any case, following Test-Driven Development can very effectively prevent this type of tight coupling from being introduced in the first place.

What do you put in your webservice?

I have a website (ASP.NET) and some WinForms apps (.NET 2.0) for a project (written in C#). I use a web service (IIS 6) for tasks that both require, like sending email inside the business.
I think web services are nice, but I would like to know, from your experience, what should and should not be in a web service?
In my opinion, web services should be reserved for:
1. code that you either can't or don't want to distribute; or
2. code that needs to seriously scale up.
One example is custom business logic that multiple applications need access to.
Code you don't want to put into web services includes:
1. code that is performance-critical;
2. code that applies only to the application in question.
Well it sounds like you have a limited Service Oriented Architecture (at least, that's what I think you're getting at), which according to Gartner means you'll be rich soon. :)
I find that the benefit of SOA really comes down to the heterogeneity of the systems involved (it sounds like yours doesn't qualify there, because it's all .NET), and the main downside of SOA is the verbose nature of XML. True, you don't need XML for SOA, but it's the current majority choice, IMHO.
But if you're not concerned about the bandwidth/parsing penalties, who cares? Maybe you're not piping through 10,000 service calls a minute. With this style of implementation, you're following DRY, just with a WS instead of a sub, and you're adhering to a standard that is by nature compatible with multiple systems.
There are worse approaches.
It seems like the new trend for web services/SOA is to more or less expose a light-weight middle tier that your host application can use. Instead of having individual method calls exposed through a service (as in your example), SOA-oriented applications have extensive Data/Operation contracts that act as the "traditional" middle tier assembly.
As little as possible, while still being useful.
By default, DON'T put every field of the return objects in the return data, and DON'T expose every method of an existing class.
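For instance (illustrative names only), return a purpose-built DTO rather than the full internal entity:

// Internal entity: rich, with members callers have no business seeing.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string PasswordHash { get; set; }
    public decimal InternalCreditLimit { get; set; }
}

// What the web service actually returns: only what callers need.
public class CustomerSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
}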