I am writing a web service API and I have a question about retry logic.
My API calls a few other downstream APIs.
Should I put the retry logic around the downstream service calls I am making, or should I just tell the client 'please retry' and let the client implement the retry logic?
Your API should plan for the worst-case scenario, so if it needs other APIs in order to work, you should handle their exceptions and timeouts.
One good approach, as you mentioned, is to implement retry logic.
Please refer to this question for how to implement it. A better approach is to use a backoff schedule such as Fibonacci, so you do not call the other APIs at a fixed interval.
There are also libraries out there that already implement retries.
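As a rough illustration, here is a minimal Java sketch of retry logic with a Fibonacci backoff between attempts; callDownstreamApi() is a hypothetical stand-in for whatever downstream call your service makes, not part of any particular library.

import java.util.concurrent.Callable;

// Minimal retry sketch: wait 1, 1, 2, 3, 5, ... seconds between failed attempts.
public class RetryingCaller {

    public static <T> T callWithRetry(Callable<T> call, int maxAttempts) throws Exception {
        long prevDelay = 0, delay = 1; // seconds, Fibonacci sequence
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay * 1000);
                    long next = prevDelay + delay; // advance the Fibonacci sequence
                    prevDelay = delay;
                    delay = next;
                }
            }
        }
        throw last; // give up and let your own client decide what to do next
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callWithRetry(RetryingCaller::callDownstreamApi, 5));
    }

    // Hypothetical downstream call; replace with your real HTTP/SOAP client code.
    private static String callDownstreamApi() {
        if (Math.random() < 0.7) throw new RuntimeException("downstream timeout");
        return "OK";
    }
}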
I'm writing an API in WCF on .NET 4.6.1. The client(s) will not be written by me, and will not necessarily be in .NET (they could be in any language/platform).
There is a web method which does something that can take a long time, so I want to encourage the client to call it asynchronously. I know that the client can be written to treat the web method as async (threading, etc.), but is there a way of "enforcing" that the actual web service is an async operation? i.e. does WSDL have a way of saying "this is an async method"?
Does WSDL have a way of saying "this is an async method"?
No, it doesn't. The communication between the client and the service is synchronous even if the client thread does not block while that call is taking place. That is to say, the invocation is asynchronous, not the web service method itself.
If you provide good documentation saying that for a particular operation it is advisable to use a separate thread because the response is slow to generate, you should be OK. Clients need to be built and their integration with the web service tested; the developers will notice the slow response and decide whether they need to make the call in a non-blocking way. Even blocking might be a solution for them, you never know; what you consider slow, others might have no issue with.
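On the client side this usually just means wrapping the blocking call in its own thread. Here is a minimal Java sketch, where callSlowSoapOperation() is a hypothetical stand-in for the blocking method on a generated client stub:

import java.util.concurrent.CompletableFuture;

// The service call itself stays synchronous; only the invocation is made
// non-blocking by running it on another thread.
public class AsyncInvocationExample {

    public static void main(String[] args) {
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(AsyncInvocationExample::callSlowSoapOperation);

        System.out.println("Call dispatched, doing other work...");

        // Block only when the result is actually needed.
        System.out.println("Service replied: " + future.join());
    }

    // Hypothetical stand-in for the generated SOAP client stub's blocking method.
    private static String callSlowSoapOperation() {
        try {
            Thread.sleep(2000); // simulate a slow response
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}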
If you want to "force" clients not to block for the response, you could use, for example, WS-Addressing (I'm assuming here that you are using WCF for a SOAP web service), where your client provides a callback endpoint that you invoke when the response is ready. This complicates the client a bit, since it now needs to have a receiving endpoint. But a client developer might prefer to choose how they invoke the service (in a blocking or non-blocking way) rather than having to implement the WS-Addressing spec.
I am doing some research on SOAP for a personal project, and I came across a website with a list of pros and cons of using SOAP. I understood what most of them meant, except for this one under disadvantages:
SOAP is typically limited to pooling, and not event notifications, when leveraging HTTP for transport. What's more, only one client can use the services of one server in typical situations.
From my understanding of pooling, there should be no issue pooling a SOAP object for reusability. Pooling is simply a way to use the same resources over and over again, like a connection to a database. I'm also not entirely certain about the context of event notifications.
So my two questions are: what does the block-quoted text above actually mean, and is this information correct?
Website: http://searchsoa.techtarget.com/definition/SOAP
SOAP is RPC, and in RPC some local client invokes a method on some remote target and receives a result. That's how it works, so SOAP works that way too. A client invokes a service asking for something and the service just responds.
If you want "events" in this type of communication the most simple approach is to invoke the service more often (i.e. polling). This has the advantage that nothing changes for the server or the client. It's the same RPC call but done more frequently.
These days everyone is connected to the web and everyone is subscribed to all sorts of services. They want to get notified as soon as something happens in the world around them. Polling becomes inefficient in this sea of users and services because you are wasting resources: you might poll a service a hundred times just to get back one notification. For this reason technology is evolving so that resource use is minimized, and the direction this is moving in is push services.
Now almost everything happens in the browser. Every browser manufacturer rushes to implement the latest technology changes and the HTML5 spec. This means pages that actually push notifications to users instead of faking it with Ajax, Comet, etc.
SOAP has been around since 1998 and it's not moving as fast as the rest of the web, mainly because SOAP is mostly an enterprise player and because it's a protocol. Because it's a protocol you have to make new technology available to it without breaking that protocol. Things move slower so people have abandoned SOAP in favor of other ways of doing server-client communication.
SOAP is typically limited to pooling, and not event notifications...
That is correct. But be aware that "typically" does not mean "always".
You can have events, but it's harder. It involves using WS-* specifications like WS-Eventing and WS-Addressing. This changes the way SOAP clients operate, because a client now becomes a kind of service itself: it needs to receive calls, not just initiate them. If your technology stack implements these specifications, good for you; if it doesn't, you have to build that support yourself, and it's a real pain.
So for these reasons, if you don't have blocking performance or resource-usage issues, you "typically" choose polling with SOAP rather than event notifications.
Let's say I have web applications/services:
API
Set of Applications
The API is used for managing some resources (simple CRUD operations). What I need now is to subscribe the Applications to changes of different API resources; an Application would then do some background work when a change occurs.
I came up with the idea of callbacks, so that Applications can authorize via OAuth and post a callback config to the API.
I think that this config should look like this:
{
  "callback_url": "http://3rdpartyservice.com/callback",
  "resources": ["foo1", "foo2"],
  "ref_data": { "token": "abcd1234" }
}
resources is an array of the resources the 3rd party service is interested in
ref_data is custom JSON for 3rd party usage (e.g. for auth)
This way, on a change to one of the specified resources, the API would send a request to callback_url. This request would contain the resource data, the action (create/update/delete) and ref_data.
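To make that concrete, here is a rough Java sketch of such a callback dispatch using the standard java.net.http client; the payload fields and the URL are just illustrative values taken from the example config above, not a fixed format:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the API's side: on a resource change, POST the resource data,
// the action and the subscriber's ref_data to the configured callback_url.
public class CallbackDispatcher {

    public static void main(String[] args) throws Exception {
        String payload = """
                {
                  "resource": "foo1",
                  "action": "update",
                  "data": { "id": 42, "name": "example" },
                  "ref_data": { "token": "abcd1234" }
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://3rdpartyservice.com/callback"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Callback delivered, status " + response.statusCode());
    }
}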
The intention here is to make this generic enough to allow 3rd party clients to configure such callbacks.
So the questions are:
Are there any best practices?
What about security potential issues?
Are there any real world examples on the web?
Tx
This sounds very similar to WebHooks or Service Hooks.
Check out Web Hooks on GitHub to get a good idea of what they are and how they work. See also the last paragraph, Service Hooks, as it explains how GitHub handles these WebHooks; this would be similar for your application. The OAuth section explains why and how it is done.
See also Webhooks, REST and the Open Web, from API User Experience.
There is even RestHooks.
The general solution to this requirement is usually called "publish/subscribe". There are dozens of solutions to this - google "publish subscribe REST" for some examples. You can also read "Enterprise Integration Patterns".
The key challenge in this kind of solution is "real-time versus queue".
For instance, if you have an API with a million clients who are all interested in the same event, you cannot guarantee that you can reach all of those clients in real time within whatever timeframe their application demands. You also have to worry about the network going away, or clients being temporarily down. In this case, your application might define an event queue, and clients look in that queue for events they're interested in. Once you go down that route, you're probably going to use some off-the-shelf software rather than building your own. Apache Camel is a good open source implementation.
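For example, a minimal Apache Camel route that pushes resource-change events onto a queue which subscribers then consume at their own pace might look like the sketch below (the JMS endpoint and the "direct:resourceChanged" name are assumptions; a real setup needs a configured JMS component):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// Sketch of a queue-based event pipeline: the API drops change events onto an
// internal endpoint and Camel forwards them to a JMS queue for subscribers.
public class EventQueueRoute {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:resourceChanged")        // where the API publishes events
                    .to("jms:queue:resource-events"); // where subscribers pick them up
            }
        });
        context.start();

        // The API would publish an event like this:
        context.createProducerTemplate()
               .sendBody("direct:resourceChanged", "{\"resource\":\"foo1\",\"action\":\"update\"}");

        context.stop();
    }
}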
In your example, for instance, what happens if you cannot reach 3rdpartyservice.com? Or if http://3rdpartyservice.com/callback throws an error when posting an update to foo1, but not to foo2? Or if http://3rdpartyservice.com/ uses a different flavour of OAuth than you're used to? How do you prove to http://3rdpartyservice.com/ that it's you posting an update, and not a hacker?
Your choices really tend to come down to your non-functional requirements, rather than the functional ones - things like uptime, guarantee of notification, guarantee of delivery, etc. are more important than the specifics of how you pass across the parameters, and whether it's "resource-based" or some other protocol.
We have a system using EJB 3 stateless beans which are also exposed as web services.
There's an integration request from another team that wants our system to fire a notification to another system after an invocation (via web services or other means). Since this is not directly related to our system, I would prefer to keep this feature loosely coupled with our own system instead of hard-coding it into our system code.
Is there any feature of EJB or web services that can achieve what I want? We would need a method-level invocation listener, so that when the EJB method/web service gets invoked it triggers a callback/message we can act on. I would expect it to be some kind of annotation/configuration for setting up JMS or something similar.
We are using JBoss as the application server. If there's any JBoss-specific solution it's also welcome.
I would suggest two options:
use JMS. When you mentioned loose coupling, JMS was the first thing that crossed my mind: you can put a message on some queue/topic after the method invocation and let a listener perform further actions. JMS messages can carry various kinds of objects; the only requirement is that the class implements Serializable (ObjectMessage#setObject). Another advantage is that you can (un)deploy your stateless bean and the other system independently; they can even run on different JVMs.
use Interceptors. Technically, they are invoked before your method runs, but of course there is always a nice workaround :-) Here is the official documentation about Interceptors, and since you mentioned that you're using JBoss, there is also some interesting material on the JBoss pages. A rough sketch combining both ideas follows below.
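As a sketch only (the JNDI names are assumptions for a typical JBoss setup, not something from your configuration), an EJB interceptor can let the business method run first and then put a JMS message on a queue, keeping the notification concern out of the business code:

import javax.annotation.Resource;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Interceptor that fires a JMS notification after the intercepted method completes.
public class NotificationInterceptor {

    @Resource(mappedName = "java:/ConnectionFactory") // JBoss default factory (assumed)
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "queue/InvocationNotifications") // hypothetical queue name
    private Queue queue;

    @AroundInvoke
    public Object notifyAfterInvocation(InvocationContext ctx) throws Exception {
        Object result = ctx.proceed(); // run the actual EJB / web service method first
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("invoked: " + ctx.getMethod().getName()));
        } finally {
            connection.close();
        }
        return result;
    }
}

The interceptor would then be attached to the bean with @Interceptors(NotificationInterceptor.class) or via ejb-jar.xml, and a message-driven bean (or the other team's system) consumes the queue.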
I venture that most, but not all, web services today are synchronous. Whether to implement asynchronous processing is a fundamental design decision.
Is there value in implementing a processing-queue system for asynchronous web services? It is a MOM/infrastructure decision I am toying with: instead of going system-to-system, implement middleware which brokers those transactions. The ease of managing and tracking/troubleshooting a spider web of services seems to make the most sense.
How best have you implemented asynchronous web services?
It is interesting that I stumbled onto this question; I have exactly the same concern in the project I am currently developing.
Our web services are developed using TIBCO technology, and they are also synchronous by default. We are considering creating a queue mechanism to process these requests asynchronously, the reason being that the back-end storage technology we have to interface with is notoriously slow (it is an imposed technology, and we have to deal with it).
Personally I am considering creating a 2nd WSDL definition for the asynchronous replies (which can arrive from a few seconds to a few hours after the request, depending on the load on the mentioned back-end storage). Clients calling our web services would then in turn have to implement a web service based on this "2nd WSDL", to which we act as clients.
I'd be interested in knowing the directions you are exploring.