Webservice caching reverse proxy?

I'd like to put some kind of caching reverse proxy in front of a SOAP webservice
over HTTP to improve both performance and availability.
Is there some software that
performs this? (Preferably free and easy to install/use).
The idea is this: the responses of the webservice vary with the request, but
for each request the responses rarely change. So the proxy could
store the responses for each request for some time, and give the cached
response when the same request is sent again. There is only a limited number
of different requests.
The proxy does not need to parse and understand the request or response.
But it does need to understand HTTP POSTs and, say, construct a hash
of the request body in order to look up the correct response. Caching by URL,
as HTTP proxies normally do, does not help here.
(Of course one can cache the webservice's results in the application
that calls the webservice, but I am looking for a solution that is
standalone, independent from the application.)

Try Ventus Proxy For Webservices. It does exactly what you need.
http://www.ventusproxy.com

I'm not sure whether it works with SOAP out of the box, but check out Varnish. It's a very powerful cache/reverse proxy. Note that Varnish does not cache POST requests by default, so caching SOAP calls would need custom VCL.

Related


Why don't we have a GET call in SOAP?
We only send POST requests with SOAP. Why?
In RESTful APIs, GET, POST, etc. are part of the "method call," so to speak.
However, in SOAP all of the information about the method call is specified in XML.
POST is just more practical for transmitting XML objects in the body of an HTTP request or response. The query string in a GET request would be awkward and has length limitations.
However, SOAP 1.2 supports GET for certain requests. This means you can take advantage of caching responses.
SOAP is also not bound to any underlying transport architecture (like HTTP). That means it could be used on top of SMTP for example.
See the section on http binding for more info: https://www.w3.org/TR/soap12-part2/#soapinhttp
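For a concrete contrast, here is a rough sketch (the endpoint, operation and envelope are invented) of a SOAP call on the wire next to a plain GET:

# Rough sketch: a SOAP call is an XML envelope POSTed to a single endpoint,
# while a REST-style call encodes the operation in the verb and URL.
# The endpoint, operation and envelope are invented.
import requests

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Symbol>ACME</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

soap_response = requests.post("http://example.com/stockservice",
                              data=envelope,
                              headers={"Content-Type": "application/soap+xml"})

rest_response = requests.get("http://example.com/stock/ACME/price")  # a cacheable GET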

Difference between REST call and URL

I have been into web development for some time, but recently came across an older technology, REST. I read about REST calls in various places; what I have understood about a REST service is:
REST service responds back with JSON or XML data, which can be used on client side for rendering the DOM elements.
It enhances the use of the HTTP protocol.
The URL difference between a REST call and normal URL is:
REST CALL: www.xyz.com/getCart/12
URL: www.xyz.com/getCart.php?cartId=12
I got the basic difference: hitting the URL would render a page at the server end and return the response, whereas making an AJAX call to the REST service would simply return JSON or XML output which can be parsed at the client end.
My question is:
If I make my .php page render a JSON string, and the application makes an AJAX call to the PHP page to get the JSON response back and uses it on the client side to render the DOM, then what is the difference between a REST call and a normal URL call?
How are REST calls configured differently from normal URLs?
There's a lot of misinformation and confusion about REST. I'm not surprised that these three points are what you understood from the information available, but they are wrong.
REST isn't coupled to any particular data format or media type. The most important constraint in REST is the emphasis on a uniform interface, which means in this case that the server should be able to respond with whatever data format or media type the clients accept. Under HTTP, the client tells the server what formats it can understand through the Accept header, and the server should comply or fail with a 406 Not Acceptable error.
In the same way, REST isn't coupled to any particular protocol, although it's often conflated with HTTP. Again, following the uniform interface, the clients should be able to follow any links provided by the server, for any protocol with a valid URI scheme.
The semantics of URLs are completely irrelevant to REST. All that matters to REST is that a URL identifies one and only one resource. The URL is an atomic identifier, and the client shouldn't rely on any semantics embedded in it for any operations. The two examples you give are both valid in REST. There's nothing more or less RESTful about either of them.
To answer your question, under a REST application the difference you imagine doesn't exist. Hitting a URL will return a response. If the client requests with an Accept: text/html header, it may return the human-friendly HTML page to be rendered by a browser. If the client requests with an Accept: application/json or Accept: application/xml header, it may return a machine-friendly format to be read by another application.
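To make that concrete, here is a small sketch (the URL is invented) of one resource requested in two representations:

# Same URL, two representations; the server chooses based on the Accept header.
# The URL is invented.
import requests

url = "http://www.xyz.com/carts/12"

as_html = requests.get(url, headers={"Accept": "text/html"})         # page for a browser
as_json = requests.get(url, headers={"Accept": "application/json"})  # data for a script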
REST is just an architectural style, there is no technical difference.
One of the things that REST defines is that your URLs need to be atomic identifiers, each referring to only one resource.
GET /users/:id (return the user with the given :id)
PUT /users/:id (update the user with the given :id)
Here is an answer about using a framework to make a REST API in PHP.
REST puts more emphasis on the verbs, like GET, PUT, POST... You can call one method like
/api/Customers
and depending on the verb you use it will do a GET, POST, PUT or DELETE. You can also make cleaner URLs like
/api/Customers/{id}/Orders/{id}
instead of making a method that would be
api/GetCustomersOrders?customerId=x&orderId=y.
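A minimal sketch of that verb-based dispatch, using Python/Flask purely as an illustration (the routes and data are invented):

# One URL per resource; the HTTP verb selects the operation.
# Flask is used only as an illustration; routes and data are invented.
from flask import Flask, jsonify, request

app = Flask(__name__)
customers = {1: {"name": "Alice"}}  # toy in-memory store

@app.route("/api/Customers/<int:cid>", methods=["GET", "PUT", "DELETE"])
def customer(cid):
    if request.method == "GET":
        return jsonify(customers.get(cid, {}))
    if request.method == "PUT":
        customers[cid] = request.get_json()
        return jsonify(customers[cid])
    customers.pop(cid, None)          # DELETE
    return ("", 204)

@app.route("/api/Customers/<int:cid>/Orders/<int:oid>", methods=["GET"])
def customer_order(cid, oid):
    return jsonify({"customer": cid, "order": oid})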
All Web Services are APIs, but not all APIs are Web services.
APIs are application interfaces, meaning that one application is able to interact with another application in a standardized way.
Web services are a type of API, which must be accessed through a network connection.
REST APIs follow a standardized architectural style for building web APIs around HTTP methods and resources.

Discriminating between infrastructure and business logic when using HTTP status codes

We are trying to build a REST interface that allows users to test the existence of a specific resource. Let's assume we're selling domain names: the user needs to determine if the domain is available.
An HTTP GET combined with 200 and 404 response codes seems sensible at first glance.
The problem we have is discriminating between a request successfully served by our lookup service, and a request served under exceptional behaviour from other components. For example:
404 and 200 can be returned by intermediary proxies that actually block the request. This can be due to proxy misconfiguration, or even external infrastructure such as coffee shop Wifi using poor forms-based authentication.
Clients could be using broken URLs. This could occur through deprecation or (again) by misconfiguration. We could combat the former through 301, however.
What is the current best practice for discriminating between responses that have been successfully fulfilled against the client's intention for that request, and responses served through exceptional behaviour?
The problem is eliminated by tunnelling responses through the response body, as we can ensure these are unique to our service. However, this doesn't seem very RESTful!
Simply have your application add some content to its HTTP responses that will distinguish them from the responses thrown by intermediaries. Any or all of these would work:
Information about the error in the response content that is recognizable as your application's content (for example, Application error: Domain name not found (404))
A Content-Type header in the response that indicates that the response content should be decoded as an application error (for example, Content-Type: application/vnd.domain-finder.error+json)
A custom header in the response that indicates it is an application error
Once you implement a scheme like this, your API clients will need to be aware of the mechanism you choose if they want to react differently to application errors versus infrastructure errors, so just document it clearly.
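For example, a lookup handler might mark its own 404s along these lines (a sketch only; the media type and header name are illustrative, not a standard):

# Sketch: a 404 that is recognizably "ours" rather than an intermediary's.
# The media type and header name are illustrative, not a standard.
import json
from flask import Flask, Response

app = Flask(__name__)
REGISTERED = {"exists.com"}  # toy data standing in for a real registry lookup

@app.route("/domains/<name>")
def lookup(name):
    if name in REGISTERED:
        return Response(json.dumps({"domain": name, "registered": True}),
                        content_type="application/json")
    body = json.dumps({"error": "domain_name_not_found", "domain": name})
    return Response(body, status=404,
                    content_type="application/vnd.domain-finder.error+json",
                    headers={"X-Application-Error": "domain_name_not_found"})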
I tend to follow the "do what's RESTful as long as it makes sense" line of thinking.
Let's say you have an API that looks like this:
/api/v1/domains/<name>/
Hitting /api/v1/domains/exists.com/ could then return a 200 with some whois information.
Hitting /api/v1/domains/doesnt.com/ could return a 404 with links to purchase options.
That would probably work. If the returned content follows a strict format (e.g. a JSON response with a results key) then your API's responses can be differentiated from your proxies' responses.
Alternatively, you could offer
/api/v1/domains/?search=maybe
/api/v1/domains/?lookup=maybe.com
This is now slightly less RESTful but it's still self-describing and (in my opinion) not that bad. Now every response can be a 200 and your content can reveal the results.
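A client can then trust only responses that match that strict shape, for example (a sketch; the URL and field names are invented):

# Client-side sketch: trust the payload only if it has the API's strict shape.
# The URL and field names are invented.
import requests

resp = requests.get("https://api.example.com/api/v1/domains/",
                    params={"lookup": "maybe.com"})
try:
    payload = resp.json()
except ValueError:
    payload = None

if isinstance(payload, dict) and "results" in payload:
    print("API answered:", payload["results"])
else:
    print("Not our API's response (proxy, captive portal, ...?)")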

Consume REST service that returns a single value

I am used to consuming Web services via an XMLHttpRequest, to retrieve XML or JSON.
Recently, I have been working with SharePoint REST services, which can return a single value (for example 5532, or "Jeff"). I am wondering if there is a more efficient way than XMLHttpRequest to retrieve this single value. For example, would it work if I loaded the REST url via an iframe, then retrieved the iframe content? Or is there any other well established method?
[Edit] By single value, I really mean that the service just returns these characters. This is not even wrapped in a JSON or XML response.
Any inefficiency in XMLHttpRequest is largely due to the overhead of HTTP, which the iframe approach is going to incur, as well. Furthermore, if the Sharepoint service expects to speak HTTP, you're going to need to speak HTTP. However, an API does not have to run over HTTP to be RESTful, per Roy Fielding, so if the service provided an API over a raw socket -- or if you simply wanted to craft your own slimmer HTTP request -- you could use a Flash socket via a library like: http://code.google.com/p/javascript-as3-socket/. You could cut the request message size down to under 100 bytes, and could pull out the response data trivially.
The jQuery library is a well-established framework which you can use. There is also an article that answers your concrete question on Stack Overflow.
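Whichever transport you end up using, reading a bare value is just reading the raw response body as text, for example (the URL is invented; in a browser, XMLHttpRequest's responseText gives you the same string):

# Sketch: the "single value" is simply the raw response body, read as text.
# The URL is invented.
import requests

value = requests.get("https://example.com/_api/itemCount").text
print(value)  # e.g. "5532"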

Should I RESTify my RPC calls over HTTP?

We have HTTP webservices that are RPC. They return XML representing the object they either retrieved or created. I want to know the advantages (if any) of "restifying" the services.
POST http://www.example.com/createDoodad
GET http://www.example.com/getDoodad?id=13
GET http://www.example.com/getWidget?id1=11&id2=45
POST http://www.example.com/createWidget
POST http://www.example.com/createSprocked
One thing I see is that we don't need representations for EVERY resource and we don't need to support all operations (GET, PUT, POST, DELETE) on all resources either.
Basically my question is this:
Convince me that I should be using RESTful services instead of RPC over HTTP, and what should those RESTful services be?
For one, it's all about semantics: a URI is a Uniform Resource Identifier. HTTP provides methods to GET, POST, PUT, and DELETE a resource. HTTP headers specify in which format I want to receive or send the information. This is all readily available through the HTTP protocol.
So you could reuse the same URL you use for HTML output to get XML or JSON, the way HTTP was meant to be used.
XML-RPC and SOAP are based on calling methods that are described by an XSD or WSDL file, whilst REST is based on getting/modifying resources. The difference is subtle but apparent: the URL solely describes the resource and not the action, as is often the case with SOAP and XML-RPC.
The benefits of REST are that you can utilize HTTP verbs to modify a resource as opposed to a method call that could be named create/new/add, etc., get meaningful HTTP status codes instead of different kinds of error responses, and specify different formats on the same resource in a standard way.
You also don't have to accept ALL the verbs on a RESTful resource; for example, if you want a read-only resource, just return a 405 Method Not Allowed status code for any verb other than GET.
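For instance, a read-only resource can answer anything but GET with a 405 plus an Allow header (a sketch; the route is invented, and many frameworks will do this for you if you only declare GET):

# Sketch: a read-only resource; anything except GET gets 405 Method Not Allowed.
# The route is invented.
from flask import Flask, Response, jsonify, request

app = Flask(__name__)

@app.route("/reports/<int:rid>", methods=["GET", "POST", "PUT", "DELETE"])
def report(rid):
    if request.method != "GET":
        return Response(status=405, headers={"Allow": "GET"})
    return jsonify({"id": rid, "status": "final"})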
Should you redo your RPC calls as REST? No, I don't think so; the benefits don't outweigh the development time. Should you learn REST when setting up a new webservice? Yes, I personally think so; consuming a REST resource feels a lot more natural, and the service can grow much more rapidly.
EDIT
Why I feel REST wins over XML-RPC/SOAP is that when developing websites you already aggregate all the necessary data for the HTML output, and you already write validation code for POST bodies. Why should you switch to a different protocol just because the transport markup changes?
This way, when you design a new website (language agnostic), if you really think of URIs as resources, you basically use your URIs as method calls, with the HTTP verb prefixing the method call.
That is, a GET on /products/12 with an HTTP header Accept: application/json basically translates to the (imaginary) call getProducts(12, MimeType.Json).
This 'method' then has to do a couple of things:
Check if we support JSON as a MIME type. (Validate request)
Validate request data
Aggregate data for product 12.
Format to JSON and return.
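A rough sketch of those four steps for the imaginary getProducts(12, MimeType.Json) call (the route and data are invented for illustration):

# Sketch of the four steps for the imaginary getProducts(12, MimeType.Json) call.
# The route and product data are invented.
import json
from flask import Flask, Response, abort, request

app = Flask(__name__)
PRODUCTS = {12: {"id": 12, "name": "Widget"}}  # stands in for real data aggregation

@app.route("/products/<int:pid>", methods=["GET"])
def get_product(pid):
    accept = request.headers.get("Accept", "application/json")
    if "application/json" not in accept and "*/*" not in accept:
        abort(406)                               # 1. only JSON is supported in this sketch
    if pid not in PRODUCTS:
        abort(404)                               # 2. validate the request data
    product = PRODUCTS[pid]                      # 3. aggregate data for the product
    return Response(json.dumps(product),         # 4. format to JSON and return
                    content_type="application/json")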
If, for some reason, YAML becomes the next big craze in the next 4 years and one of your consumers wishes to talk to you that way, this MIME type can be plugged in a lot more easily than with regular web services.
Now, product 12 is a resource on which you most likely also want to accept HTML MIME types, to display said product; but for a URI like /product/12/reviews/14/ you don't need an HTML counterpart, you just want your consumers to be able to update (PUT) or delete (DELETE) their own review at that URL.
Thinking of URIs strictly as resources, not just locations of web pages, and combining these resources with the HTTP verb to drive method invocations on the server side, leads to clean (SEO-friendly) URLs and (more importantly?) ease of development.
I'm sure there are frameworks in any language that will automatically do the mapping of URIs to method invocations for you. I can't recommend one since I usually roll out my own.
ASP.NET MVC also works on the same principle, but in my opinion it doesn't produce RESTful URIs, since ASP.NET MVC makes the verb part of the URI by default. Having said that, it's good to note that by no means does ASP.NET MVC force this (or anything, for that matter) upon you.
If you're going to choose a framework, at the very least it should:
Bind URIs to methods on the server
Support object-to-JSON/XML (etc.) serialization. It's a pain if you have to write this yourself, although, depending on the language, not necessarily all that difficult.
Expose some sort of type-safe request helpers to help you determine what was requested without parsing the HTTP headers manually.
Try taking the verbs out of your URLs:
POST http://www.example.com/Doodad
GET http://www.example.com/Doodad/13
GET http://www.example.com/Widget/11/45
POST http://www.example.com/Widget
POST http://www.example.com/Sprocked
Query strings shouldn't be used for accessing a resource in a hierarchical, non-query manner. Caches often handle query strings poorly: some skip caching such URLs entirely (slow), and some ignore the query string when building the cache key (dangerous).