I'm building a Java/Spring application, and I may need to incorporate a stateful web service call.
Any opinions on whether I should run away from stateful service calls altogether, or whether it can be done and is enterprise ready?
Statefulness runs counter to the basic architecture of HTTP (ask Roy Fielding), and reduces scalability.
Stateful web services are a pain to maintain. The mechanism I have seen for them is to have the first call return an id (basically a transaction id) that is used in subsequent calls. A problem with that is that the web service isn't really stateful so it has to load all the information that it needs from some other data store for each call.
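A minimal sketch of that transaction-id pattern (the endpoint and names are hypothetical); the point is that the service itself stays stateless and rehydrates everything from a store on each call:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical "stateful" web service: the first call hands out an id and every
// subsequent call rehydrates the conversation state keyed by that id. In a real
// deployment the map below would be an external store (DB, Redis) shared by all nodes.
@RestController
public class BookingFlowController {

    private final Map<String, Map<String, String>> stateStore = new ConcurrentHashMap<>();

    @PostMapping("/booking-flow")
    public String start() {
        String flowId = UUID.randomUUID().toString();
        stateStore.put(flowId, new ConcurrentHashMap<>()); // persist the initial, empty state
        return flowId;                                     // the client must echo this id back
    }

    @PostMapping("/booking-flow/{flowId}/step")
    public Map<String, String> step(@PathVariable String flowId,
                                    @RequestBody Map<String, String> stepData) {
        Map<String, String> state = stateStore.get(flowId); // reload all state on each call
        state.putAll(stepData);                              // (no error handling for unknown ids in this sketch)
        return state;
    }
}
```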
While reading about RESTful web services I came across something like the following:
"The constraint to the client-server interaction is that communication must be stateless. The server should not be relied upon to maintain application state by storing session objects."
So does this mean that in SOAP web services the server saves the session? I have used the SoapUI tool for testing SOAP services, where I send a request XML with all the parameters and get back the response. In what way do RESTful web services differ from SOAP in terms of statelessness?
No, it does not mean SOAP is stateful in all cases. Typically you design all services to be stateless; having stateful services, whether SOAP or REST, introduces complexity that creates all sorts of problems.
However, some people do implement SOAP or REST services with state, and they normally aren't very successful. Maintaining state requires both the server and the client to be aware of, and track, the current state.
So in short, the principle remains: never store state in a service. When you call an operation it should return either a success or a failure code; it should not remember that the last operation for this client did not succeed.
Also, while REST says services should be stateless, you can implement one in a stateful manner. Never ever go stateful: it creates a lot of confusion for service consumers, i.e. the client. Think about it this way: if an operation is stateless you can call it and get repeatable results, whereas if it is stateful the operation might change its results based on the state.
Stateful operations also limit scalability, because the service has to remember more.
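To make that concrete, here is a hedged sketch (hypothetical endpoints and names) contrasting the two styles: the stateless version takes everything it needs from the request, while the stateful version quietly depends on what previous calls left behind.

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PricingController {

    // Stateless: every input arrives with the request, so any server in the
    // cluster can answer it and repeated calls give repeatable results.
    @GetMapping("/price")
    public double price(@RequestParam double base, @RequestParam int quantity) {
        return base * quantity;
    }

    // Stateful (anti-pattern): the result depends on what earlier calls did to
    // this particular server instance, so results are not repeatable and the
    // request is pinned to the node holding the counter.
    private final AtomicInteger callsSoFar = new AtomicInteger();

    @GetMapping("/discounted-price")
    public double discountedPrice(@RequestParam double base, @RequestParam int quantity) {
        double discount = Math.min(0.30, callsSoFar.incrementAndGet() * 0.01);
        return base * quantity * (1 - discount);
    }
}
```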
The following is a point mentioned in a presentation slide about SOA, and it confuses me with respect to the concepts of service orchestration and service choreography. To enable service choreography, shouldn't a web service be able to call another web service?
SOA builds applications out of software services. Services comprise intrinsically unassociated, loosely coupled units of functionality that have no calls to each other embedded in them.
In theory, a service can do anything it needs to do to accomplish its job. So there doesn't seem to be a good reason to forbid using a second service to do your work. Why reinvent the wheel?
In practice, the issue is more complicated. If you start calling other services on your own web server, you'll eventually starve it of resources. At best, "real" clients will have to wait a bit longer for their answers while your server is busy calling itself.
Another issue is recursive loops: Service A calls B calls C calls A calls B ... you get the idea. A small change in one service can introduce such a loop without anyone noticing and it can sit there for a long time until it suddenly kills your server.
That is why you should build micro services in a hierarchy inside the server (i.e. below the web service layer - this is not exposed to clients). Those micro services can use each other in a top-down manner (to avoid the loops). Unit tests then make sure they behave properly.
Lastly, such reuse is very slow. Each HTTP request takes a lot of resources to create, send, parse and process. Calling an internal method directly can be 10 - 10000 times faster.
These are the main reasons why the services exposed by a single server shouldn't reuse each other via the "public client API".
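A hedged sketch of that layering (hypothetical names): the public controller delegates to internal service beans that call each other directly, top-down, instead of each endpoint issuing HTTP requests back to its own server.

```java
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Internal services, layered top-down: OrderService may use PricingService,
// but PricingService never calls back "up" into OrderService (no loops).
@Service
class PricingService {
    double priceFor(String productId) {
        return 42.0; // stand-in for a real lookup
    }
}

@Service
class OrderService {
    private final PricingService pricing;

    OrderService(PricingService pricing) {
        this.pricing = pricing;
    }

    double quote(String productId, int quantity) {
        // Direct in-process call: no HTTP round trip back to our own API.
        return pricing.priceFor(productId) * quantity;
    }
}

// Only this layer is exposed to clients.
@RestController
class OrderController {
    private final OrderService orders;

    OrderController(OrderService orders) {
        this.orders = orders;
    }

    @GetMapping("/orders/{productId}/quote")
    public double quote(@PathVariable String productId) {
        return orders.quote(productId, 1);
    }
}
```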
Note: There are web services which build new services by using existing ones. IFTTT - "If This Then That" is one such beast.
You can adopt either concept according to your needs. In my current project we have a separate module that is responsible for orchestration. This is required because real-life usage scenarios can be very complicated, so in order to stay close to the actual management of your system you need such a module.
Another advantage of this approach is that separation of concerns is preserved. It also aligns the business request with the applications, data, and infrastructure that you have, and it defines policies and service levels through automated workflows, provisioning, and so on.
Orchestration is critical in the delivery of cloud services too, since those services are networked to allow sharing of data-processing tasks, centralized data storage, and online access to services and resources.
We have a system using an EJB 3 stateless session bean which is also exposed as a web service.
There's an integration request from another team: they want our system to fire a notification to another system after each invocation (by web services or other means). Since this is not really related to our own system, I would prefer to keep the feature loosely coupled instead of hard-coding it into our system code.
Is there any feature of EJB or web services that can achieve this? We would need a method-level invocation listener, so that when the EJB method / web service gets invoked it triggers a callback or message that we can act on. I would expect it to be some kind of annotation/configuration for setting up JMS or something similar.
We are using JBoss as the application server, so JBoss-specific solutions are also welcome.
I would suggest two options:
Use JMS. When you mentioned loose coupling, JMS was the first thing that crossed my mind: you can put a message on a queue/topic after the method invocation and let a listener perform the further actions. JMS messages can carry various kinds of objects; the only requirement is that the class implements Serializable (ObjectMessage#setObject). Another advantage is that you can (un)deploy your stateless bean and the other system independently; they can even run on different JVMs.
Use interceptors. Technically, they are invoked before your method runs, but of course there is always a nice workaround :-) The official documentation on interceptors covers the basics, and since you mentioned that you're using JBoss, there is also some interesting material on the JBoss pages. A combined sketch of both options follows below.
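A hedged sketch combining the two options. The interceptor and JMS calls are standard EJB 3 / JMS 1.1 API, but the JNDI names and queue binding are assumptions for a typical JBoss setup, not values from your environment:

```java
import javax.annotation.Resource;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Interceptor that publishes a JMS message once the intercepted EJB method
// has completed, so the notification stays out of the business code.
public class NotificationInterceptor {

    @Resource(mappedName = "java:/ConnectionFactory")   // assumed JBoss JNDI name
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "queue/notifications")       // assumed queue binding
    private Queue notificationQueue;

    @AroundInvoke
    public Object notifyAfterInvocation(InvocationContext ctx) throws Exception {
        Object result = ctx.proceed();                   // run the actual EJB method first
        sendNotification(ctx.getMethod().getName());     // then fire the notification
        return result;
    }

    private void sendNotification(String methodName) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(notificationQueue);
            producer.send(session.createTextMessage("invoked: " + methodName));
        } finally {
            connection.close();
        }
    }
}
```

The bean itself then only needs @Interceptors(NotificationInterceptor.class) on the class or method, and a message-driven bean (or the other team's system) can consume the queue independently.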
I always read that one reason to choose a RESTful architecture is (among others) better scalability for web applications under high load.
Why is that? One reason I can think of is that because of the defined resources which are the same for every client, caching is made easier. After the first request, subsequent requests are served from a memcached instance which also scales well horizontally.
But couldn't you also accomplish this with a traditional approach where actions are encoded in the URL, e.g. (booking.php/userid=123&travelid=456&foobar=789)?
A part of REST is indeed the URL part (it's the R in REST) but the S is more important for scaling: state.
The server end of REST is stateless, which means that the server doesn't have to store anything across requests. This means that there doesn't have to be (much) communication between servers, making it horizontally scalable.
Of course, there's a small bonus in the R (representational) in that a load balancer can easily route the request to the right server if you have nice URLs, and GET could go to a slave while POSTs go to masters.
I think what Tom said is very accurate; however, another problem with scalability is the barrier to change as you scale. One of the biggest tenets of REST as it was intended is hypermedia: basically, the server owns the paths and passes them to the client at runtime. This allows you to change your code without breaking existing clients. However, you will find most implementations of REST to simply be RPC hiding behind the guise of REST... which is not scalable.
"Scalable" or "web scale" is one of the most abused terms when it comes to the web, the cloud and REST, and mainly used to convince management to get their support for moving their development team on board the REST train.
It is a buzzword that holds no value. If you search the web for "REST scalability" you'll find a lot of people parroting each other without any concrete evidence.
A REST service is exactly as scalable as a service exposed over a SOAP interface. Both are just HTTP interfaces to an application service, and how well that service actually scales depends entirely on how it was implemented. It's possible to write a service that cannot scale at all in both REST and SOAP.
Yes, you can do things with SOAP that make it scale worse, like relying on state and sessions, but SOAP out of the box does not do this. Relying on state does require a smarter load balancer, which you want anyway if you're really concerned with any form of scaling.
One thing that REST allows that SOAP doesn't, and that some other answers here address, is caching cacheable responses through an HTTP caching proxy or at the client side. This may make a REST service somewhat more lightly loaded than a SOAP service when a lot of operations' responses are cacheable. All this means is that fewer requests end up in your service.
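For what it's worth, in a Spring application marking a response as cacheable is close to a one-liner. A hedged sketch with hypothetical endpoint names:

```java
import java.util.concurrent.TimeUnit;

import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TravelController {

    // A GET whose response is safe to cache for a while: an HTTP caching proxy
    // or the client itself can reuse it, so fewer requests reach the service.
    @GetMapping("/travels/{id}")
    public ResponseEntity<String> travel(@PathVariable String id) {
        String body = "travel " + id;                       // stand-in for a real lookup
        return ResponseEntity.ok()
                .cacheControl(CacheControl.maxAge(10, TimeUnit.MINUTES))
                .body(body);
    }
}
```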
The main reason people say a REST application is scalable is that it is built on HTTP, and HTTP is stateless. Stateless means nothing is shared between requests, so any request can go to any server in a load-balanced cluster; there is nothing forcing a particular user's request to go to a particular server. Things that would otherwise need a server-side session, such as identifying the user, can be handled with a token instead.
Because of this statelessness, REST applications are easy to scale out. But if you also want high throughput (the number of requests each server can handle per second), you should remove blocking work from the application. Follow these tips:
Make each REST resource a small entity; don't build it from joins across many tables.
Read data from nearby databases.
Use caches (e.g. Redis) instead of hitting the database every time, saving disk I/O (see the sketch after this list).
Keep data sources as near as possible, because blocking calls leave server resources (CPU) idle, and no other request can use that resource while it is blocked.
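A hedged sketch of the caching tip, assuming Spring's caching abstraction with a Redis-backed CacheManager is configured; the service and cache names are hypothetical:

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class TravelService {

    // With a Redis-backed CacheManager configured, the first call for a given id
    // hits the database; subsequent calls are served from the cache, saving disk I/O.
    @Cacheable(value = "travels", key = "#id")
    public String findTravel(String id) {
        return loadFromDatabase(id);
    }

    private String loadFromDatabase(String id) {
        // stand-in for the real (slow, blocking) database read
        return "travel " + id;
    }
}
```

This assumes @EnableCaching is present on a configuration class and a Redis CacheManager (for example from spring-data-redis) is wired up.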
A reason (perhaps not the reason) is that RESTful services are sessionless. This means you can easily use a load balancer to direct requests to various web servers without having to replicate session state among all of your web servers or making sure all requests from a single session go to the same web server.
I venture that most, but not all, web services today are synchronous. A fundamental design decision is whether to implement asynchronous processing.
Is there value in implementing a processing-queue system for asynchronous web services? It is a MOM/infrastructure decision I am toying with: instead of going system-to-system, implement middleware that brokers those transactions. Given the ease of management and of tracking/troubleshooting compared with a spider web of point-to-point services, this seems to make the most sense.
How best have you implemented asynchronous web services?
It is interesting that I stumbled onto this question; I have exactly the same concern in the project I am currently developing.
Our web services are developed using TIBCO technology, and they are also synchronous by default. We are considering creating a queue mechanism to process these requests asynchronously, the reason being that the back-end storage technology we have to interface with is notoriously slow (it is an imposed technology, and we have to deal with it).
Personally I am considering creating a second WSDL definition for the asynchronous replies (which can arrive anywhere from a few seconds to a few hours after the request, depending on the load on the aforementioned back-end storage). Clients calling our web services will in turn have to implement a web service based on this "2nd WSDL", for which we act as the client.
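For comparison, here is a hedged sketch of the same accept-now-reply-later pattern using a plain JMS queue instead of a second WSDL; the queue and class names are assumptions, not part of either of our systems:

```java
import java.util.UUID;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// The synchronous entry point only validates and enqueues the work, then returns
// a correlation id immediately. A separate consumer does the slow back-end call
// and later notifies the client's callback service, quoting the same id.
public class AsyncRequestGateway {

    private final ConnectionFactory connectionFactory;
    private final Queue workQueue;

    public AsyncRequestGateway(ConnectionFactory connectionFactory, Queue workQueue) {
        this.connectionFactory = connectionFactory;
        this.workQueue = workQueue;
    }

    public String submit(String requestPayload) throws Exception {
        String correlationId = UUID.randomUUID().toString();
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(workQueue);
            TextMessage message = session.createTextMessage(requestPayload);
            message.setJMSCorrelationID(correlationId);   // the reply quotes this id later
            producer.send(message);
        } finally {
            connection.close();
        }
        return correlationId;   // handed back to the caller right away
    }
}
```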
I'd be interested in knowing the directions you are exploring.