BizTalk and the best way to call web services

I am writing a BizTalk orchestration that will need to call a web service, probably multiple web services, and probably more than once. I see two options before me: one, consume the WSDL in a separate code project and call the web services from code in an expression shape; two, consume it from BizTalk, generate the schemas, etc., and call through request/response ports. What is the best practice here? On the one hand, if the WSDL is updated it will be easier to update the code than the schemas and ports, and it seems like a lot of clutter and work to build enough ports for multiple web service calls. On the other hand, all the tuning you can do at the port level (retries being one) makes calling a web service robust.

Also see this question here, which discusses a third option, viz. using Add Service Reference in BizTalk as an alternative method to import XSDs.
IMO you would be defeating the point of using BizTalk by using .NET proxies to handle integration. For example:
You are hard-coding the protocol (WCF) and now need to marshal request and response messages to/from your custom code. With a send port, any request-response mechanism can be configured at deployment time - especially useful for unit and integration testing.
You will be losing all of the benefits of BizTalk's message delivery mechanisms, such as retries, backup transports, resuming suspended messages, different maps for different ports, and arguably the whole pub-sub ability (e.g. what if multiple listeners want to listen to the responses from the called web services?).
Where will you store the WCF serviceModel config settings, such as the endpoint, etc.? I.e. you've lost the flexibility of binding files.
etc.
So, TL;DR: always use the WCF adapters in BizTalk.
However, that said, I am in agreement that updating generated items when the consumed service changes can be messy. FWIW, we mitigate some of this as follows:
Always create a separate, empty folder into which to import all the generated artifacts.
Leave all the generated items as-is, i.e. don't be tempted to move the dummy .odx or delete it (since it has the preconfigured Port Types).
Unfortunately this leaves the below actions, which still need to be applied manually:
Remember to change the visibility of the Port Types to public if the artifacts are in a separate assembly from your orchestrations.
Promoted and distinguished properties on the imported schemas need to be reapplied (e.g. keep screenshots documenting them so they can be reapplied after any change). Possibly this can be simplified or automated by saving and re-pasting the <xs:annotation> section of the schema.
If you are using message contracts in your WCF service and are reusing the same referenced messages across multiple applications, you will need to manually delete the duplicates created by the Add Generated Items wizard and then re-reference the existing schemas (e.g. we have a standard 'response' message back to all BizTalk calls).

Interestingly, you can in fact have a mixture of both. Check out this post by Saravana Kumar.
It uses a passthrough receive and consumes a web service using the proxy DLL on the send port, without going through the pain of creating schemas and web ports.
This gives you all the power of BizTalk (routing responses, send port configuration, etc.) and still the flexibility to change the schema without much fuss.

Related

SOAP Pooling Advantages / Disadvantages

I am doing some research on SOAP for a personal project, and I came across a website with a list of pros and cons for using SOAP. I understood what most of them meant, except for this one under disadvantages:
SOAP is typically limited to pooling, and not event notifications, when leveraging HTTP for transport. What's more, only one client can use the services of one server in typical situations.
From my understanding of pooling, there should be no issue pooling a SOAP object for reusability. Pooling is simply a way to use the same resources over and over again, like a connection to a database. I'm also not entirely certain of the context of "event notifications" here.
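For illustration, this is the kind of pooling I mean - a minimal Java sketch, where SoapClient is a hypothetical stand-in for an expensive-to-create service stub:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Minimal illustration of object pooling: reuse costly objects
    // instead of recreating them for every request.
    public class SoapClientPool {
        private final BlockingQueue<SoapClient> pool = new ArrayBlockingQueue<>(10);

        public SoapClientPool() {
            for (int i = 0; i < 10; i++) {
                pool.offer(new SoapClient()); // pre-create the reusable clients
            }
        }

        public SoapClient borrow() throws InterruptedException {
            return pool.take();               // blocks until a client is free
        }

        public void release(SoapClient client) {
            pool.offer(client);               // hand the client back for reuse
        }

        static class SoapClient { /* stand-in for an expensive-to-create stub */ }
    }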
So my two questions here are: what does the above block-quoted text actually mean, and is this information correct?
Website: http://searchsoa.techtarget.com/definition/SOAP
SOAP is RPC, and in RPC a local client invokes a method on some remote target and receives a result. That's how it works, so SOAP works that way too: a client invokes a service asking for something, and the service responds.
If you want "events" in this type of communication, the simplest approach is to invoke the service more often (i.e. polling). This has the advantage that nothing changes for the server or the client; it's the same RPC call, just done more frequently.
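For instance, a minimal polling sketch in Java - the interval and the stub call are invented; a real client would invoke a generated SOAP stub where checkForUpdates stands:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SoapPollingClient {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Same RPC call, just repeated on a schedule; 30s is an arbitrary interval.
            scheduler.scheduleAtFixedRate(SoapPollingClient::checkForUpdates,
                                          0, 30, TimeUnit.SECONDS);
        }

        private static void checkForUpdates() {
            // In a real client this would be a call on a generated SOAP stub,
            // e.g. stub.getOrderStatus(orderId) -- hypothetical name.
            System.out.println("Polling the service for changes...");
        }
    }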
These days everyone is connected to the web and subscribed to all sorts of services, and they want to be notified as soon as something happens in the world around them. Polling becomes inefficient in this sea of users and services because you are wasting resources: you might poll a service a hundred times just to get back one notification. For this reason technology is evolving to minimize resource use, and the direction it is moving in is push services.
Now almost everything happens in the browser. Every browser manufacturer rushes to implement the latest technology changes and the HTML5 spec. This means pages that actually push notifications to users instead of faking it with Ajax, Comet, etc.
SOAP has been around since 1998 and it's not moving as fast as the rest of the web, mainly because SOAP is mostly an enterprise player and because it's a protocol. Being a protocol, new technology has to be made available to it without breaking that protocol. Things move slower, so people have abandoned SOAP in favor of other ways of doing server-client communication.
SOAP is typically limited to pooling, and not event notifications...
That is correct. But be aware that "typically" does not mean "always".
You can have events, but it's harder. It involves using WS-* specifications like WS-Eventing and WS-Addressing. This changes the way SOAP clients operate, because a client now becomes a sort of service too: it needs to receive calls, not just initiate them. If your technology stack implements these specifications, good for you; if it doesn't, you have to build it yourself, and that's a real pain.
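As a rough illustration of what "the client becomes a service" means, here is a minimal JAX-WS sketch in Java: the subscribing client hosts its own callback endpoint. The class name, address, and operation are all invented for illustration:

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Hypothetical callback service: the subscriber publishes this endpoint
    // and hands its address to the event source (e.g. via WS-Eventing Subscribe).
    @WebService
    public class OrderEventCallback {

        public void onOrderShipped(String orderId) {
            System.out.println("Event received: order " + orderId + " shipped");
        }

        public static void main(String[] args) {
            // The "client" now also listens for incoming SOAP calls.
            Endpoint.publish("http://localhost:8085/callbacks/orders",
                             new OrderEventCallback());
        }
    }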
So for these reasons, if you don't have blocking performance or resource usage issues, you "typically" choose polling with SOAP rather than event notifications.

Can a service call another service inside its code?

Following is a point mentioned in a presentation slide related to SOA, and it confuses me about the concepts of service orchestration and service choreography. To enable service choreography, shouldn't a web service be able to call another web service?
SOA builds applications out of software services. Services comprise intrinsically unassociated, loosely coupled units of functionality that have no calls to each other embedded in them.
In theory, a service can do anything it needs to do to accomplish its job. So there doesn't seem to be a good reason to forbid using a second service to do your work. Why reinvent the wheel?
In practice, the issue is more complicated. If you start calling other services on your own web server, you'll eventually starve it of resources. At best, "real" clients will have to wait a bit longer for their answers while your web service server busies itself with its own calls.
Another issue is recursive loops: Service A calls B calls C calls A calls B ... you get the idea. A small change in one service can introduce such a loop without anyone noticing and it can sit there for a long time until it suddenly kills your server.
That is why you should build microservices in a hierarchy inside the server (i.e. below the web service layer, which is not exposed to clients). Those microservices can use each other in a top-down manner (to avoid loops). Unit tests then make sure they behave properly.
Lastly, such reuse is very slow. Each HTTP request takes a lot of resources to create, send, parse, and process; calling an internal method directly can be 10 to 10,000 times faster.
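A minimal sketch of that hierarchy in Java (all names invented): the public-facing service composes internal components through plain method calls rather than going back through its own public HTTP API.

    // Internal "microservices": plain components below the web service layer.
    interface PricingComponent { double priceFor(String sku); }
    interface InventoryComponent { void reserve(String sku, int qty); }

    // The only class exposed as a web service; it calls downward, never
    // sideways through HTTP, so there is no request overhead and no loop risk.
    public class OrderService {
        private final PricingComponent pricing;
        private final InventoryComponent inventory;

        public OrderService(PricingComponent pricing, InventoryComponent inventory) {
            this.pricing = pricing;
            this.inventory = inventory;
        }

        public double quote(String sku, int qty) {
            inventory.reserve(sku, qty);          // direct in-process call
            return pricing.priceFor(sku) * qty;   // no serialization, no parsing
        }
    }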
These are the main reasons why the services exposed by a single server shouldn't reuse each other via the "public client API".
Note: There are web services which build new services by using existing ones. IFTTT - "If This Then That" is one such beast.
You can adopt each concept according to your needs. In my current project we have a separate module that is responsible for orchestration. This is required since real-life usage scenarios can be very complicated, so in order to stay close to the actual management of your system, you need such a module.
Another advantage of this approach is that separation of concerns is preserved. It also aligns the business requests with the applications, data, and infrastructure that you have, and it defines policies and service levels through automated workflows, provisioning, etc.
Orchestration is critical in the delivery of cloud services too, as they are networked to allow sharing of data-processing tasks, centralized data storage, and online access to services or resources.

Best practice to integrate web services (with Camel)

I have the following situation. Several services provide their functionality mainly via SOAP interfaces. There is one module that wants to consume this functionality for integration into a website. What would be the best practice to do that?
The functionality of the services is subject to change. Therefore, each single function/method should be "reroutable". The web service is probably hosted on a different machine.
Is it reasonable to map all web services to JMS queues (my first idea)? The website module would then talk only to JMS. A router would route all incoming JMS messages to the different web services (or elsewhere).
Or: there could be one dedicated web service that integrates all functions, to be used exclusively by the web site. The advantage here would be that parameters and return values are typed.
What would you suggest? What could be another, better approach?
If I understood you right, what you're aiming at is a homogeneous interface: a coherent API for your webapp module, serving as a facade for multiple remote interfaces (mainly SOAP ones).
Regarding the JMS approach you've mentioned - it seems reasonable, but:
Instead of many queues I'd rather go with a single JMS destination with a Camel content-based router immediately after the queue (or two queues for the Request/Reply pattern). That would make things "reroutable" and isolate the web module from service changes (see the route sketch after this list).
Your requests would be less prone to service-related errors, brief problems in service availability, and such, while still retaining the ability to perform Request/Reply style calls (which suit the RPC-ish nature of SOAP).
I'd use one-way style calls wherever they are applicable (for increased reliability and responsiveness).
You should not worry about the lack of typing; WSDL+SOAP seems to enforce strong typing, but that's an illusion driven by auto-generated stubs. You still have to marshal the data back and forth.
Instead of SOAP I'd go with JSON, as it is far cleaner and less redundant than XML (and probably faster, but that's usually irrelevant). Jackson is a very efficient JSON library and it's already supported in Camel distributions. JMS ObjectMessage is a big NO (there is a good article on some of the ObjectMessage pitfalls).
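To make the queue-plus-router idea concrete, here is a hedged Camel sketch. The endpoint URIs, header name, and queue names are all invented, and it assumes camel-jms, camel-jackson, and camel-spring-ws (or a similar SOAP producer) are on the classpath:

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.model.dataformat.JsonLibrary;

    public class ServiceFacadeRoute extends RouteBuilder {
        @Override
        public void configure() {
            // One entry queue isolates the web module from service changes:
            // rerouting a function is just an edit to this route.
            from("jms:queue:integration.requests")
                .unmarshal().json(JsonLibrary.Jackson)   // JSON in from the web module
                .choice()                                // content-based router
                    .when(header("operation").isEqualTo("booking"))
                        .to("spring-ws:http://booking.example.com/ws")
                    .when(header("operation").isEqualTo("billing"))
                        .to("spring-ws:http://billing.example.com/ws")
                    .otherwise()
                        .to("jms:queue:integration.unroutable")
                .end()
                .marshal().json(JsonLibrary.Jackson);    // JSON back to the caller
        }
    }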
The single-service approach seems to be a good way to separate the web module from the service layer. It lacks the flexibility and fault tolerance of the JMS approach but seems a tad easier to implement.
If there are many calls that can be concluded in a one-way fashion, I'd say go with JMS and reroute the messages after the queue.

Why are RESTful Applications easier to scale

I always read that one reason to choose a RESTful architecture is (among others) better scalability for web applications under high load.
Why is that? One reason I can think of is that, because the defined resources are the same for every client, caching is made easier: after the first request, subsequent requests can be served from a memcached instance, which also scales well horizontally.
But couldn't you also accomplish this with a traditional approach where actions are encoded in the URL, e.g. booking.php?userid=123&travelid=456&foobar=789?
A part of REST is indeed the URL part (it's the R in REST) but the S is more important for scaling: state.
The server end of REST is stateless, which means the server doesn't have to store anything across requests. This means there doesn't have to be (much) communication between servers, which makes them horizontally scalable.
Of course, there's a small bonus in the R (representational) in that a load balancer can easily route a request to the right server if you have nice URLs, and GETs could go to a slave while POSTs go to the master.
I think what Tom said is very accurate; however, another problem with scalability is the barrier to change as you scale. One of the biggest tenets of REST as it was intended is hypermedia: the server owns the paths and passes them to the client at runtime. This allows you to change your code without breaking existing clients. However, you will find most implementations of REST to simply be RPC hiding behind the guise of REST, which is not scalable in this sense.
"Scalable" or "web scale" is one of the most abused terms when it comes to the web, the cloud and REST, and mainly used to convince management to get their support for moving their development team on board the REST train.
It is a buzzword that holds no value. If you search the web for "REST scalability" you'll find a lot of people parroting each other without any concrete evidence.
A REST service is exactly as scalable as a service exposed over a SOAP interface. Both are just HTTP interfaces to an application service. How well the service actually scales depends entirely on how it was implemented. It's possible to write a service that cannot scale at all in both REST and SOAP.
Yes, you can do things with SOAP that make it scale worse, like relying on state and sessions; SOAP out of the box does not do this. Relying on state requires a smarter load balancer, which you want anyway if you're really concerned with any form of scaling.
One thing that REST allows and SOAP doesn't, which some other answers here address, is caching cacheable responses through an HTTP caching proxy or at the client side. This may make a REST service somewhat more lightly loaded than a SOAP service when many operations' responses are cacheable; all it means is that fewer requests end up at your service.
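For example, a sketch of marking a response cacheable with JAX-RS (the resource path and max-age are chosen arbitrarily), so a caching proxy or the client can answer repeat requests without hitting the service:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.CacheControl;
    import javax.ws.rs.core.Response;

    @Path("/products")
    public class ProductResource {

        @GET
        public Response list() {
            CacheControl cc = new CacheControl();
            cc.setMaxAge(300);  // proxies/clients may reuse this for 5 minutes
            return Response.ok("[{\"sku\":\"example\"}]")
                           .cacheControl(cc)
                           .build();
        }
    }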
The main reason a REST application is said to be scalable is that it is built on the HTTP protocol, and HTTP is stateless: nothing is shared between requests. So any request can go to any server in a load-balanced cluster; nothing forces a given user's request to go to a particular server. Where per-user state is needed, it can be carried in a token instead.
Because of this statelessness, REST applications are easy to scale out. But if you want high throughput (requests handled per second) on each server, you should also remove blocking work from the application. Some tips:
Make each REST resource a small entity; don't read data from joins over many tables.
Read data from nearby databases.
Use caches (e.g. Redis) instead of hitting databases, saving disk I/O (a cache-aside sketch follows these tips).
Keep data sources as near as possible, because blocking on remote I/O leaves server resources (CPU) idle, and no other request can use those resources while they sit idle.
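As a sketch of the cache tip above, assuming the Jedis client (the key scheme, TTL, and database lookup are invented):

    import redis.clients.jedis.Jedis;

    public class ProductCache {
        private final Jedis jedis = new Jedis("localhost", 6379);

        public String productJson(String id) {
            String cached = jedis.get("product:" + id);
            if (cached != null) {
                return cached;                        // cache hit: no disk I/O
            }
            String fresh = loadFromDatabase(id);      // hypothetical DB lookup
            jedis.setex("product:" + id, 300, fresh); // keep it for 5 minutes
            return fresh;
        }

        private String loadFromDatabase(String id) {
            return "{\"id\":\"" + id + "\"}";         // stand-in for a real query
        }
    }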
A reason (perhaps not the reason) is that RESTful services are sessionless. This means you can easily use a load balancer to direct requests to various web servers without having to replicate session state among all of your web servers or making sure all requests from a single session go to the same web server.
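A sketch of what "sessionless" looks like in practice with JAX-RS (the header name and validation are placeholders): all the state needed to serve the request travels with it, so any server behind the load balancer can handle it.

    import javax.ws.rs.GET;
    import javax.ws.rs.HeaderParam;
    import javax.ws.rs.Path;

    @Path("/account")
    public class AccountResource {

        @GET
        public String profile(@HeaderParam("Authorization") String token) {
            // No server-side session: the token itself (e.g. a signed JWT)
            // carries or proves the user's identity on every request.
            String userId = validate(token);  // hypothetical validation
            return "{\"user\":\"" + userId + "\"}";
        }

        private String validate(String token) {
            return "123";  // stand-in for real signature/claims checking
        }
    }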

Add SOAP to an existing GWT solution

I am looking for a clean way to add service-oriented access to an existing GWT application (client + RemoteService-based server). The thing is that all the services are already in place, described by the @RemoteServiceRelativePath annotation. It would be nice to be able to simply add the @WebService annotation and have access to them both via RPC and XML/JSON/...
The real problem is that extending the current application to support clients other than the existing GWT one is hard because of the GWT obfuscation. This also leads to unneeded coupling between client and server, since they must be deployed at the same time because of the generated .gwt.rpc files.
I would like to reuse the existing RemoteService interfaces to define web services and connect to them with new clients via a plain-text protocol. Additionally, I would like to port the existing GWT client to the same protocol.
Is it possible to do this while using the same interfaces and implementation just by annotation?
What would be the best way to port the existing client to use a plain-text protocol: RequestBuilder? Or just injecting a new serialization implementation that does XML/JSON?
I don't even know where to start with this, which is why I'm asking. Maybe it is better to rewrite all the services and port everything at once, but that would break everything until it is finished.
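For concreteness, a sketch of the dual-annotation idea being asked about (all names invented; whether JAX-WS and GWT-RPC can genuinely share one interface this way is exactly the open question):

    import javax.jws.WebService;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // The existing GWT-RPC contract...
    @RemoteServiceRelativePath("orders")
    public interface OrderService extends RemoteService {
        String getOrderStatus(String orderId);
    }

    // ...and the hoped-for dual exposure: the same implementation
    // additionally published as a SOAP endpoint via @WebService.
    @WebService
    class OrderServiceImpl implements OrderService {
        @Override
        public String getOrderStatus(String orderId) {
            return "SHIPPED";
        }
    }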
We took a different approach, since the coupling GWT creates between server and client side is not all bad: it gives you nice integration and you don't have to think too much about communication issues, etc.
For that, our application had a frontend tier consisting of the full GWT stack (client + server coupling), and on the server side we connected via Spring and RPC to the service layer.
That way you get the benefits of Spring and you don't lose the comfort of GWT.
But I would like to hear if somebody has already gone other ways ;)
This is rather late, and GWT is not the wonder child it once was. However, for the sake of tying up loose ends, here's the solution I went for:
create a Java generator that parses all model files (classes shared between client and server) through reflection and generates a Java file that reads/writes SOAP objects
bootstrap the above into a generic Java handler that handles native objects plus arrays, sets, and maps
write the service that can deal with the XML generated from the files above
It sounds a bit terse and a bit complicated, but it "only" took about a month to write the code to reliably convert more than 200 objects to their XML representation, automatically. The added benefit is that it allows mocking and cross-platform clients/servers.
In summary, the generated code adds 'fromXML' and 'toXML' methods that populate the fields exposed as public members (get/set) in the given class. So, given MyClass, it would generate the MyClassSerializer and MyClassDeserializer Java classes that implement those SOAP-specific methods and also publish themselves to a 'dispatcher'. Whenever that dispatcher sees MyClass, it knows where to get the ser/deser functions from.
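A stripped-down sketch of that dispatcher pattern (all names invented; the real generated code also handled nested objects, arrays, sets, and maps as described):

    import java.util.HashMap;
    import java.util.Map;

    // Contract implemented by each generated serializer/deserializer pair.
    interface XmlSerializer<T> {
        String toXML(T value);
        T fromXML(String xml);
    }

    public class SerializerDispatcher {
        private static final Map<Class<?>, XmlSerializer<?>> REGISTRY = new HashMap<>();

        // Generated classes publish themselves here once at startup.
        public static <T> void register(Class<T> type, XmlSerializer<T> serializer) {
            REGISTRY.put(type, serializer);
        }

        @SuppressWarnings("unchecked")
        public static <T> String toXML(T value) {
            XmlSerializer<T> s = (XmlSerializer<T>) REGISTRY.get(value.getClass());
            if (s == null) {
                throw new IllegalStateException("No serializer for " + value.getClass());
            }
            return s.toXML(value);
        }
    }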