Why should RESTful services be stateless? What's the benefit? - web-services

I have heard plenty of times that a RESTful service should be stateless: all state information should be stored on the client, and each request should contain all the necessary state information.
But why? What's the benefit of doing that? Only once I know the benefit/motivation can I use it properly.
What if my client has a huge amount of state? Suppose there's an online document editing application. Does the client have to send the full text he/she is editing with every call to the server's RESTful API? Or is this scenario simply not suitable for the RESTful approach?

When talking about REST services (or rather RESTful, since not many people adhere 100% to the paper I will quote here), I always think it's best to start with the source, meaning Fielding's dissertation, which says in section 5.1.3, Stateless:
This constraint induces the properties of visibility, reliability, and scalability. Visibility is improved because a monitoring system does not have to look beyond a single request datum in order to determine the full nature of the request. Reliability is improved because it eases the task of recovering from partial failures [133]. Scalability is improved because not having to store state between requests allows the server component to quickly free resources, and further simplifies implementation because the server doesn't have to manage resource usage across requests.
It goes even further, discussing the trade-offs:
Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context. In addition, placing the application state on the client-side reduces the server's control over consistent application behavior, since the application becomes dependent on the correct implementation of semantics across multiple client versions.
But Fielding doesn't stop there; he goes on to talk about caching as a way to overcome some of these problems.
I highly recommend you go through that PDF, since (from what I remember) it was the original paper that introduced REST.
The use case you provided is a tough one, and as many have said, it depends on your exact scenario. Services are called RESTful and not REST because people found the original paper too limiting and decided to loosen the rules a bit (for instance, the original paper doesn't say anything about batch operations).

The primary benefit is scalability -- by not needing to fetch additional context for each request, you minimize the amount of work done by the server, which may need to service many requests at the same time.
Additionally, it provides greater clarity to consumers of your API. By having the consumer send everything related to the operation being performed, they can see more clearly what is actually being done, and the error messages they get can often be more direct as a result: an error can say which value is wrong and why, rather than trying to communicate that something the consumer can't see went wrong on the server.
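To make this concrete, here is a minimal sketch (my own illustration, not from the original answer) of a fully self-contained request using Java's built-in HttpClient; the endpoint, token and JSON body are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatelessClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Every request carries its own credentials and full payload;
        // the server needs no session to interpret it.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.example.com/orders/42")) // hypothetical endpoint
            .header("Authorization", "Bearer <token>")           // auth sent on every call
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(
                "{\"status\":\"shipped\",\"carrier\":\"DHL\"}"))
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}

Because the request is self-describing, any server instance behind a load balancer can handle it, which is exactly the scalability property Fielding describes.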

From the same chapter of Fielding's dissertation:
Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context.
Advantages are explained as follows:
This constraint induces the properties of visibility, reliability, and scalability. Visibility is improved because a monitoring system does not have to look beyond a single request datum in order to determine the full nature of the request. Reliability is improved because it eases the task of recovering from partial failures [133]. Scalability is improved because not having to store state between requests allows the server component to quickly free resources, and further simplifies implementation because the server doesn't have to manage resource usage across requests.
Regarding your specific case: yes and no. This is how the Web works. When we edit something online, we send the entire representation to the server. How we implement partial updates, though, is a design choice.
Software can be designed to accomplish this goal by sending PUT/POST requests to sub-resources. For example:
PUT /book/chapter1 HTTP/1.1
PUT /book/chapter2 HTTP/1.1
PUT /book/chapter3 HTTP/1.1
instead of updating whole resource:
PUT /book HTTP/1.1
Content-Type: text/xyz
Content-Length: ...
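On the server side this might map onto something like the following JAX-RS sketch (my own illustration; the resource class and the persistence helper are hypothetical):

import javax.ws.rs.*;

@Path("/book")
public class BookResource {
    // Handles PUT /book/{chapter}, so each chapter can be updated
    // independently instead of resending the whole document.
    @PUT
    @Path("/{chapter}")
    @Consumes("text/plain")
    public void updateChapter(@PathParam("chapter") String chapter, String text) {
        ChapterStore.save(chapter, text); // hypothetical persistence helper
    }
}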

Related

Web service and transactional guarantees

How do you integrate applications via web services and deal with technical errors like connectivity errors for web service calls which change state?
E.g. when the network connection gets interrupted during a web service call, how does the client know whether the web service has processed its action or not?
Can this issue be solved at the business layer only (e.g. by querying a previous call's state), or are you aware of some nice frameworks/best practices that can help wrap transactional guarantees around a web service?
Implementing it all yourself, with some kind of transactional context tracked in the business layer, is always an option. You can use compensation mechanisms to ensure transactions are rolled back if needed, but you'll need to:
have the information on transactions persisted somewhere
use transaction correlation IDs, so you can query the state when a response has been lost (having correlation IDs is a good idea anyway; see the sketch below)
implement the operations needed to read/write/rollback, etc., which might make your services a bit more complex
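Here is a minimal sketch of the correlation-ID idea (my own illustration, not from the original answer; the endpoints and the status convention are hypothetical):

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.UUID;

public class ReliableCaller {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void transferFunds(String body) throws Exception {
        // The client mints the correlation ID, so it survives a lost
        // response and can be used later to ask "did it happen?".
        String correlationId = UUID.randomUUID().toString();
        HttpRequest call = HttpRequest.newBuilder(
                URI.create("https://example.com/ws/transfer")) // hypothetical endpoint
            .header("X-Correlation-Id", correlationId)
            .timeout(Duration.ofSeconds(10))
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        try {
            CLIENT.send(call, HttpResponse.BodyHandlers.ofString());
        } catch (IOException connectionDied) {
            // We don't know whether the server processed the call, so query
            // its state by correlation ID instead of blindly retrying
            // (a blind retry could apply the transfer twice).
            HttpRequest status = HttpRequest.newBuilder(
                    URI.create("https://example.com/ws/transfer/status/" + correlationId))
                .GET().build();
            HttpResponse<String> s =
                CLIENT.send(status, HttpResponse.BodyHandlers.ofString());
            if (s.statusCode() == 404) {
                // The original call never arrived, so it is safe to resend it.
                CLIENT.send(call, HttpResponse.BodyHandlers.ofString());
            }
        }
    }
}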
Another option I can think of: if you're using SOAP, you can go for asynchronous communication and look for a stack implementing the WS-Coordination, WS-AtomicTransaction and WS-BusinessActivity specifications, then decide for yourself whether it is a good idea in your context. For example, I think Axis2 supports these, but of course it ultimately depends on the technologies and stack you use.
From the article above:
WS-AtomicTransaction defines a coordination type that is most useful for handling system-generated exceptions, such as an incomplete write operation or a process terminating abnormally.
The two-phase commit protocols it defines are volatile 2PC (for participants managing volatile resources, such as a cache) and durable 2PC (for participants managing durable resources, such as a database).
Hope this helps!

SOAP Pooling Advantages / Disadvantages

I am doing some research on SOAP, for a personal project, and I came across a website with a list of pros and cons for using SOAP, and I understood what most of them meant, except for this one under disadvantages:
SOAP is typically limited to pooling, and not event notifications, when leveraging HTTP for transport. What's more, only one client can use the services of one server in typical situations.
From my understanding of pooling, there should be no issue pooling a SOAP object for reusability. Pooling is simply a way to use the same resources over and over again, like a connection to a database. I'm also not entirely certain about the context of "event notifications" here.
So my two questions are: what does the block-quoted text above actually mean, and is this information correct?
Website: http://searchsoa.techtarget.com/definition/SOAP
SOAP is RPC, and in RPC some local client invokes a method on some remote target and receives a result. That's how it works, so SOAP works that way too. A client invokes a service asking for something and the service just responds.
If you want "events" in this type of communication the most simple approach is to invoke the service more often (i.e. polling). This has the advantage that nothing changes for the server or the client. It's the same RPC call but done more frequently.
These days everyone is connected to the web and everyone is subscribed to all sorts of services. They want to get notified as soon as something happens in the world around them. Polling becomes inefficient in this sea of users and services because you are wasting resources. You might poll a service a hundred times just to get back one notification. For this reason technology is evolving to minimize that waste, and the direction it is moving in is push services.
Now almost everything happens in the browser. Every browser manufacturer rushes to implement the latest technology changes and the HTML5 spec. This means pages that actually push notifications to users instead of faking it with Ajax, Comet, etc.
SOAP has been around since 1998 and it's not moving as fast as the rest of the web, mainly because SOAP is mostly an enterprise player, and because it's a protocol: new technology has to be made available to it without breaking that protocol. Things move slower, so people have abandoned SOAP in favor of other ways of doing server-client communication.
SOAP is typically limited to pooling, and not event notifications...
That is correct. But be aware that "typically" does not mean "always".
You can have events, but it's harder. It involves using WS-* specifications like WS-Eventing and WS-Addressing. This changes the way SOAP clients operate, because a client now becomes a sort of service itself: it needs to receive calls, not just initiate them. If your technology stack implements these specifications then good for you, but if it doesn't, then you have to build it yourself, and that's a real pain.
So for these reasons, if you don't have blocking performance or resource usage issues, you "typically" choose polling with SOAP rather than event notifications.

Consultant designed a system that uses email as a webservice

I am looking for some solid arguments against a solution a consultant supplied. A public-facing webserver hosts an aspx form and, based on user input, places the content of the form, as XML, in an email body and sends it to an email address used only for this solution. An internal system behind the company firewall then retrieves the email from the mail server, reads the XML, and processes it from there. I don't think this will be a robust solution, and I'm concerned about maintaining it, so I would rather replace it now, but there is pressure to keep the solution.
Thanks
You mostly can't judge an architectural solution without knowing the specific constraints.
Under certain constraints, this may very well be the best solution.
Let's take a look at the weak points first:
Messages may be lost because the mail service is unavailable.
Messages may be too big for the mail service. (In my corp we have a limit of 10Mb for instance.)
Messages may be corrupted in transfer. (The mail service may apply virus scanners and advertise that fact, add footers, rename attachments, etc.)
The mail system may not cope with the additional burden if the message traffic is too high.
Order of delivery is not guaranteed.
The solution is somewhat non-conventional.
Security and other non-functional requirements may not be fulfilled.
On the other hand:
This is probably a (very pragmatic) implementation of asynchronous messaging. Asynchronous messaging is often much more powerful and reliable than synchronous solutions.
This solution uses an already existing infrastructure.
A mail system does not normally lose messages out of the blue. So we basically have a reliable, persistent message store here.
Mail systems are often considered "mission critical", so they're built to be highly reliable and redundant. Using the mail service may therefore actually be more reliable than introducing a new software/hardware component.
And cheaper.
Can be tested with very pragmatic means.
E-mail has good library support (see the sketch below).
You don't need an expensive professional for implementation.
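To illustrate that library support, here is a minimal sketch of the consuming side using JavaMail over IMAP (my own illustration; the host, credentials and process() helper are hypothetical, and it assumes the XML arrives as a plain-text body):

import java.util.Properties;
import javax.mail.*;
import javax.mail.search.FlagTerm;

public class MailboxPoller {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.store.protocol", "imaps");
        Session session = Session.getInstance(props);
        Store store = session.getStore("imaps");
        store.connect("mail.example.com", "forms-inbox", "secret"); // hypothetical mailbox

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_WRITE);
        // Fetch only messages that haven't been processed yet.
        Message[] unseen = inbox.search(
            new FlagTerm(new Flags(Flags.Flag.SEEN), false));
        for (Message msg : unseen) {
            String xml = (String) msg.getContent(); // assumes a text/plain XML body
            process(xml);
            msg.setFlag(Flags.Flag.SEEN, true);     // mark as handled
        }
        inbox.close(false);
        store.close();
    }

    private static void process(String xml) {
        // Hand the XML over to the internal system here.
        System.out.println("Received form submission: " + xml);
    }
}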
So imagine the following constraints:
Build an asynchronous message processing.
Losing a small percentage of messages is not a big deal.
Do it fast.
Do it cheap.
Quick and dirty is OK ("we'll throw it away in three months anyway").
Under these constraints this might be a very good and pragmatic solution.
To address the points made by #techtrek (listed further down):
"far more robust" - see above, mail system may actually be more reliable than an internal ESB infrastructure. At least this is my experience.
Agree, but not THAT risky. Attachments are normally NOT damaged in any way. Otherwise management would scream every time their PowerPoint slides got corrupted.
Email service down - well, an ESB or any internal service may go down as well.
I don't quite understand why e-mail traceability is more complicated. I send an email; it either arrives or it doesn't. If it doesn't, that is a problem of the mail service. "Complicated" compared to what?
Well, of course mail service administration is separate, but why is this a maintenance headache? We actually have all the platform services (databases, servers, ESB, etc.) administered and maintained by separate teams. This is normal practice, and I don't see why it should be a problem here. On the contrary, with the mail service you probably have a professional team specifically dedicated to the reliability of that transport channel.
Frankly, I have seen quite a few ESB/MQ solutions where I really thought it would be MUCH cheaper, easier and in fact more reliable if a few distinct apps would just send each other e-mails.
The issue with using email as the relay agent is that:
Creating a webservice that allows your company's internal system to directly receive and parse the XML seems far more robust to me.
Encapsulating the XML mime type within a transmission protocol (Email) is risky in and of itself.
As a result of (2), there are two points of failure: corruption in the XML conversion process, and the risk of the email service going down.
Beyond just points of failure, you're also complicating traceability by an order of magnitude.
Email administration is often separate from the administration of the web-service. Unless there are real, legitimate reasons for this, it just sounds like more of a maintenance headache.
I agree with what has been said, especially on point 4. Application maintenance and e-mail maintenance might be handled by disconnected entities.
Another aspect to consider is the possibility of sending mail to the backend from anywhere and flooding it that way.

Why is it bad programming to use a stateful webservice and why would it be allowed?

I have a need for a stateful webservice in our organization. However, everywhere I read online says that building a stateful webservice is bad programming, but nothing ever says why. I guess I don't understand what is so bad about it. I also don't really understand why, if it is so bad, workarounds to allow state in a webservice exist at all.
So I guess that my question is, why is it bad programming to use a stateful webservice and why would it be allowed?
The whole purpose of a web service is to deliver a piece of functionality in one transaction in a way that is highly scalable. This means keeping things simple and atomic.
When you have to make multiple calls to perform an operation, there is great potential for leaving transactions hanging. Is the client coming back? Are they done? How long should the transaction remain open? Did they crash? How should rollbacks be handled?
The answers to these questions could have a radical impact on the resources necessary to run your service. Which is why everyone recommends doing it all in one swoop.
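For illustration (my own sketch, not from the original answer; the interfaces are hypothetical), compare a stateful multi-call design with a stateless atomic one:

import java.util.List;

// Stateful: the server must hold a half-finished order between calls
// and answer all of the awkward questions above if the client vanishes.
interface StatefulOrderService {
    String beginOrder();                        // returns a server-side session ID
    void addItem(String sessionId, String sku);
    void commit(String sessionId);
}

// Stateless: one atomic call carries everything; nothing is left
// hanging if the client crashes before or after it.
interface StatelessOrderService {
    String placeOrder(String customerId, List<String> skus);
}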
Here are some reasons I can think of:
The cost of maintaining state has to be borne entirely server-side - service consumers are rarely web browsers, so they have no cookies. This hurts your server performance and increases your design complexity.
A service consumer is an intelligent program, rather than a dumb browser. As such the program will (almost always) maintain its own state. In other words, when you provide a service, your consumer will request precisely the data it wants. Maintaining state on the server becomes obsolete and unnecessary.
Transactions - a service is a dangling point in your system because its clients are mostly intelligent, and they decide when to inform you of changes in their state. This means that if you maintain state, you might have to wait between service calls to finish a transactional operation. And there's absolutely no guarantee the client will ever make that next service call.
There are a lot of reasons, but these are the ones I can think of off the top of my head :)
I think it is a kind of myth.
If Google can make their stateful web applications scalable, then why can't we scale a stateful webservice? It is the app server that limits scalability, not statefulness as such.
Whether it's a website or a webservice, the ultimate aim is to serve users better. If being stateful improves your service, then don't hesitate to go with it.

Best messaging medium for real-time SOA applications?

I'm working on a real-time application implemented in a SOA style (read: loosely coupled components connected via some messaging protocol - JMS, MQ or HTTP).
The architect who designed this system opted to use JMS to connect the components. The system is real-time, so there is no need to queue up messages should one component fail (the transaction will simply time out). Further, there is no need for guaranteed delivery or rollback.
In this instance, is there any benefit to using JMS over something like an HTTP web service (speed, resource footprint, etc)?
One thing I'm thinking is that since the JMS approach requires us to set a thread pool size (the number of components listening to a JMS topic/queue), wouldn't an HTTP service be a better fit, since this additional configuration is not needed? (A new thread is created for each HTTP request, making the application scalable to an "unlimited" number of requests, until the server runs out of resources.)
Am I missing something?
I don't disagree with the points made by S.Lott at all, but here are a couple of points to consider regarding HTTP web services:
Your clients only need to know how to communicate via HTTP - a protocol well supported by just about every modern language in one form or another. JMS, though popular, is more specialist than HTTP, and so restricts the languages your interconnected systems can use. Perhaps not an issue for your system at the moment, but will you need to plug in other systems later that might struggle to support JMS connectivity?
Standards like WSDL and SOAP, which you could leverage for your services, are well supported by many languages, and there are plenty of tools around that will generate code to implement both ends of the pipeline (client and server) for you from a WSDL file, reducing the amount of development you'll have to do. These standards also make it relatively simple to define and publish the specification of the data you'll be passing between your systems, something you'll presumably have to do by hand using a queueing technology like JMS.
On the downside, as pointed out by S.Lott, JMS gives you functionality that you throw away by using the (stateless) HTTP protocol: guaranteed ordering and reliability, monitoring, scalability, etc. Are you sure you don't need these, and won't need them going forward?
Great question, btw.
I think it's really dependent on the situation. Where I work, we support Remoting, JMS, MQ, HTTP, and sFTP. We are implementing a middleware appliance that speaks Remoting, JMS, MQ, and HTTP, and a software middleware component that speaks JMS, MQ, and HTTP.
As sgreeve alluded to above, standards help us become flexible, but proprietary formats allow more functionality.
In a nutshell, I'd say use HTTP for stateless calls (which could end up meeting almost all of your needs), and whatever proprietary formats you need for stateful calls. If you work in a big enterprise, a hardware appliance is usually a great fit as middleware: Lightning fast compression, encryption, transformation, and translation, with very low total cost of ownership.
I don't know enough about your requirements, but you may be overlooking Manageability, Flexibility and Performance.
JMS allows you to monitor and manage the queue. These are features HTTP lacks, and you'd have to build rather than buy from a vendor.
Also, there are queues and topics in JMS, allowing multiple subscribers to a single publisher - not possible with plain HTTP.
While you may not need those things in release 1.0, you might want them in the future.
Also, JMS may be able to use other transport mechanisms like named sockets, which reduces the overheads if there isn't all that socket negotiation going on with (almost) every request.
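As a concrete example of the publish/subscribe point above, here is a minimal JMS sketch (my own illustration; the topic name and quote payload are hypothetical):

import javax.jms.*;

public class PriceFeed {

    // One publisher, any number of subscribers - the part plain HTTP can't do.
    public static void publish(ConnectionFactory factory, String quote) throws JMSException {
        Connection conn = factory.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("prices"); // hypothetical topic name
            session.createProducer(topic).send(session.createTextMessage(quote));
        } finally {
            conn.close();
        }
    }

    public static void subscribe(ConnectionFactory factory) throws JMSException {
        Connection conn = factory.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("prices");
        // Each subscriber registers a listener; the broker fans messages out.
        session.createConsumer(topic).setMessageListener(
            msg -> System.out.println("Got a quote: " + msg));
        conn.start(); // start delivery once the listener is in place
    }
}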
If you go down the HTTP route and you want to support more than one machine or some kind of reliability - you are going to need a load balancer capable of discovering the available web servers and loading requests across them - then failing over to another web server if a particular box/process dies. Clients making HTTP requests are also going to have to deal with servers failing and retrying operations in some loop.
This is one of the main features of a message queue - reliable load balancing with failover and loose coupling among the producers and consumers without them having to include retry logic - so your client or server code doesn't have to worry about this kinda thing. This is totally separate to whether or not you want message persistence or want to use ACID transactions to produce/consume messages (which can be very handy BTW).
If you focus just on the server side using Java - whether Servlets or MessageListeners/MDBs - they are kinda similar either way, really. The difference is the load balancer.
So maybe the question should really be - is a JMS broker easier to setup & work with than setting up your DNS/NAT/IP/HTTP load balancer infrastructure?
I suppose it depends on what you mean by real-time... Neither JMS nor HTTP in my opinion support "real-time" applications well, meaning they cannot offer predictable/deterministic performance nor properly prioritize flows in the presence of contention.
Part of it is that these technologies are built on top of TCP, which serializes all traffic into a single FIFO, meaning that different traffic flows cannot be easily prioritized. Moreover, TCP timers are not easily controlled, resulting in unpredictable blocking and timeouts... For this reason many streaming applications use UDP instead of TCP as the underlying protocol.
Another problem with JMS is that typical implementations use a broker that centralizes message dispatch. This is not the best architecture to get deterministic performance.
If you are looking for a middleware that can offer you the kind of reliability guarantees and publish-subscribe semantics you get with JMS, but which was developed to fit the real-time application domain, I recommend you take a look at the OMG Data-Distribution Service (DDS). See dds.omg.org and this article I wrote arguing why DDS is the best middleware to implement a real-time SOA: http://soa.sys-con.com/node/467488