I have a situation where I need my API to have a call for triggering a server-side event; no information (besides authentication) is needed from the client, and nothing needs to be returned by the server. Since this doesn't fit well into the standard CRUD/resource interaction, should I take this as an indicator that I'm doing something wrong, or is there a RESTful design pattern to deal with these conditions?
Your client can just:
POST /trigger
To which the server would respond with a 202 Accepted.
That way your request can still contain the appropriate authentication headers, and the API can be extended in the future if you need the client to supply an entity, or need to return a response with information about how to query the event status.
There's nothing "non-RESTful" about what you're trying to do here; REST principles don't have to correlate to CRUD operations on resources.
The spec for 202 says:
The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
You aren't obliged to send anything in the response, given the "SHOULD" in the definition.
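If it helps to see it concretely, here is a minimal sketch of such a trigger endpoint using the JDK's built-in com.sun.net.httpserver; the port, the /trigger path and the place where the job is started are assumptions, not anything from your API:

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class TriggerServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/trigger", exchange -> {
            if (!"POST".equalsIgnoreCase(exchange.getRequestMethod())) {
                exchange.sendResponseHeaders(405, -1);   // anything but POST is rejected
            } else {
                // authenticate, then kick off the server-side job here (thread pool, queue, ...)
                exchange.sendResponseHeaders(202, -1);   // 202 Accepted, empty body
            }
            exchange.close();
        });
        server.start();
    }
}

The client's POST gets its 202 immediately, and you can later add a response body pointing at a status resource without breaking anything.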
REST defines the nature of the communication between the client and the server. In this case, I think the issue is that there is no information to transfer.
Is there any reason the client needs to initiate this at all? I'd say your server-side event should be entirely self-contained within the server. Perhaps kick it off periodically with a cron call?
Is it best practice to handle such API errors with try/catch, or is the API response supposed to be structured like the Google, Facebook and Microsoft API calls shown below?
Google API call example
Facebook API call example
Microsoft API call example
As a general rule, there isn't such a thing as a standard API, so there also isn't a best practice as such either. If you are dealing with multiple APIs within your app, you'll end up having at least a handful of variations in what you check for and how you adapt.
Depending on how terminal the failure is, and where it happens in their processing stack, the HTTP status may be set, and you may also get an HTML, JSON or XML body with more detail (no matter what you thought you might get).
APIs also fail randomly with transient errors, so for your code to work reliably, you probably need a retry loop somewhere.
They also throttle, so some kind of detect/backoff/retry handler would help (details vary per API, as ever).
Pseudocode:
retry loop {
    request
    check connection (network errors)
    check HTTP status code
    check body
    parse body if valid and extract errors
    if terminal failure, exit (authentication/authorisation etc.)
    if throttled, back off
}
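As a concrete illustration of that loop, here is a minimal runnable sketch using the java.net.http client from Java 11+. The URL, the retry count and the backoff are placeholders, and which status codes count as terminal or transient is an assumption you would adapt per API:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RetryingCall {
    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.example.com/resource")).GET().build();   // placeholder URL

        for (int attempt = 0; attempt < 5; attempt++) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                int status = response.statusCode();
                if (status == 401 || status == 403) {             // terminal: authentication/authorisation
                    throw new IllegalStateException("auth failure, giving up");
                }
                if (status == 429 || status >= 500) {             // throttled or transient server error
                    Thread.sleep(1000L << attempt);               // back off, then retry
                    continue;
                }
                System.out.println(response.body());              // parse body and extract API-specific errors here
                return;
            } catch (IOException e) {                             // connection / network error
                Thread.sleep(1000L << attempt);                   // back off, then retry
            }
        }
        throw new IllegalStateException("gave up after repeated transient failures");
    }
}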
I have a service which accepts HTTP requests from a customer site. The service then sends an HTTP request to a transactional email provider with information provided in the initial request to the service. The workflow looks like this:
CustomerSite ⟷ EmailService ⟷ TransactionalEmailProvider
I can think of two possibilities for handling requests so that errors from the TransactionalEmailProvider can be reported to the CustomerSite.
1. The EmailService immediately sends an asynchronous request to the TransactionalEmailProvider when it receives a request from a CustomerSite. The EmailService immediately responds to the CustomerSite with a success code if the request was properly formed. If a failure happens when sending the request to the TransactionalEmailProvider, the EmailService sends a failure notification back to the CustomerSite using a POST request to a webhook.
2. The EmailService sends a request to the TransactionalEmailProvider, and awaits a response before responding to the CustomerSite request with either a success or a failure.
Right now I'm implementing the first version because I don't want the responsiveness of the EmailService to be dependent on the responsiveness of the TransactionalEmailProvider.
Is this a reasonable way to process HTTP requests that are dependent upon a second level of HTTP requests? Are there situations in which one would be preferred over the other?
It really depends on the system requirements: on how you want the system to behave when some of its components fail, or under varying workload.
If you want your system to be reactive or scalable, you should use asynchronous requests whenever possible. For this, your system should be message-driven. You can read more about reactive systems here. This seems like your first option.
If you want a simpler system, then use synchronous/blocking requests, like your option no. 2.
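To make the two options concrete, here is a small sketch using java.net.http; the provider URL, the payload and the webhook handling are placeholders, not your actual services:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class DispatchStyles {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest toProvider = HttpRequest.newBuilder(
                URI.create("https://provider.example.com/send"))          // placeholder provider endpoint
                .POST(HttpRequest.BodyPublishers.ofString("{\"to\":\"user@example.com\"}"))
                .build();

        // Option 1: fire the provider call asynchronously and answer the CustomerSite right away;
        // the callback is where you would POST a failure notification to the CustomerSite's webhook.
        CompletableFuture<HttpResponse<String>> async =
                client.sendAsync(toProvider, HttpResponse.BodyHandlers.ofString());
        async.whenComplete((response, error) -> {
            if (error != null || response.statusCode() >= 400) {
                // notify the CustomerSite via its webhook here
            }
        });

        // Option 2: block until the provider answers, then relay success or failure directly.
        HttpResponse<String> sync = client.send(toProvider, HttpResponse.BodyHandlers.ofString());
        System.out.println("provider answered with " + sync.statusCode());

        async.join();   // only so this demo does not exit before the async call finishes
    }
}

Note how option 1 keeps the EmailService's response time independent of the provider, at the cost of needing a webhook channel for failures.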
We are developing a web API which processes potentially very large amounts of user-submitted content, which means that calls to our endpoints might not return immediate results. We are therefore looking at implementing an asynchronous/non-blocking API. Currently our plan is to have the user submit their content via:
POST /v1/foo
The JSON response body contains a unique request ID (a UUID), which the user then submits as a parameter in subsequent polling GETs on the same endpoint:
GET /v1/foo?request_id=<some-uuid>
If the job is finished the result is returned as JSON, otherwise a status update is returned (again JSON).
(Unless they fail, both of the above calls simply return a "200 OK" response.)
Is this a reasonable way of implementing an asynchronous API? If not what is the 'right' (and RESTful) way of doing this? The model described here recommends creating a temporary status update resource and then a final result resource, but that seems unnecessarily complicated to me.
Actually, the way described in the blog post you mentioned is the 'right' RESTful way of processing asynchronous operations. I've implemented an API that handles large file uploads and conversion and does it this way. In my opinion this is not overcomplicated, and it is definitely better than delaying the response to the client.
One additional note: if a task has failed, I would also return 200 OK, together with a representation of the task resource and the information that the resource creation has failed.
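For illustration, here is a minimal client-side sketch of the submit-then-poll flow in Java (java.net.http, Java 11+). The host, the JSON helpers and the status values are assumptions; real code would use a proper JSON library:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "https://api.example.com";   // placeholder host

        // Submit the content; the JSON response body carries the request ID.
        HttpResponse<String> submitted = client.send(
                HttpRequest.newBuilder(URI.create(base + "/v1/foo"))
                        .POST(HttpRequest.BodyPublishers.ofString("{\"content\":\"...\"}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        String requestId = extractRequestId(submitted.body());

        // Poll the same endpoint until the task reports completion or failure.
        while (true) {
            HttpResponse<String> poll = client.send(
                    HttpRequest.newBuilder(URI.create(base + "/v1/foo?request_id=" + requestId))
                            .GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (isFinished(poll.body())) {
                System.out.println(poll.body());
                break;
            }
            Thread.sleep(2_000);   // simple fixed-interval poll
        }
    }

    // Naive placeholder helpers; real code would use a JSON library such as Jackson,
    // and the status values here are assumptions about the API's response format.
    static String extractRequestId(String json) {
        return json.replaceAll(".*\"request_id\"\\s*:\\s*\"([^\"]+)\".*", "$1");
    }
    static boolean isFinished(String json) {
        return json.contains("\"status\":\"finished\"") || json.contains("\"status\":\"failed\"");
    }
}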
I have a requirement to count the jetty transactions and measure the time it took to process the request and get back the response using JMX for our monitoring system.
I am using Jetty 8.1.7 and I can't seem to find a proper way to do this. I basically need to identify when a request is sent (due to Jetty's async approach, this is triggered from thread A) and when the response is complete (as onResponseComplete() is called on another thread).
I usually use ThreadLocal for such state in other areas I need similar functionality, but obviously this won’t work here.
Any ideas how to overcome this?
To use Jetty's async requests you basically have to subclass ContentExchange and override its methods. So you can add an extra field to it which would contain a timestamp of when the request was sent, and use it later in your onResponseComplete() method to measure the processing time. If you need to know the time when your request was actually sent to the server, instead of when it was created, you can override the onRequestCommitted() and onRequestComplete() methods.
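A minimal sketch of what that subclass could look like against the Jetty 8 client API; the JMX reporting itself is left as a comment, since that part depends on your monitoring setup:

import java.io.IOException;
import org.eclipse.jetty.client.ContentExchange;

// A ContentExchange that remembers when the request was committed and measures
// the elapsed time when the response completes.
public class TimedExchange extends ContentExchange {

    private volatile long sentAtNanos;

    @Override
    protected void onRequestCommitted() throws IOException {
        sentAtNanos = System.nanoTime();   // the request has actually been written to the connection
        super.onRequestCommitted();
    }

    @Override
    protected void onResponseComplete() throws IOException {
        long elapsedMillis = (System.nanoTime() - sentAtNanos) / 1_000_000;
        System.out.println("request took " + elapsedMillis + " ms");   // replace with your JMX MBean / counter update
        super.onResponseComplete();
    }
}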
Company A has async pooling based webservice for notifications. Company B checks for notifications. Every time when it reads new notifications A deletes them from the system. Thus subsequent read requests return only new notifications. There is also requirement for the client B to interrupt the connection if there is no response within 30 sec.
This causes one potential problem: due to unexpected slowness, it is possible for A to receive the request, delete a notification, and send the response back after B has already interrupted the connection. In this scenario the notification gets lost. One can argue that the core problem lies in the operational realm (the HTTP response must be delivered within 20 sec), but in practice that is not always feasible.
How to design B (the client) to avoid this problem?
One way I can see is to not have A delete the notifications, and to make B aware of its own state, so that it knows from which ID it needs to process notifications. But that presumes the IDs will be sequential, which is controlled by A. Even if B defines its own sequence, A still has to be altered to return it back.
Are there any other approaches?
Thanks!
Web services in general are unreliable enough that it's rarely a good idea to make a "read" request serve double-duty as a "delete" request, especially without the client's knowledge. There is just too much risk of a connection dropping or timing out. There is no way to get around this only by modifying the client, because it's the server that is at fault here - the way it's designed is fundamentally unsuited for a web service.
I think you're on the right track with the incrementing IDs idea. The client knows (or can be modified to know) which notifications it's received, so if it can supply the ID of the last message it's received when it polls for notifications, the server should be able to respond based on that ID.
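A small sketch of what B's polling could look like with such a cursor, in Java; the endpoint, the "after" parameter and the persistence helpers are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NotificationPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        long lastSeenId = loadLastSeenId();   // B's own durable cursor

        // Ask A only for notifications after the last one B has durably processed.
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create(
                        "https://a.example.com/notifications?after=" + lastSeenId))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Process the batch, then persist the highest ID afterwards: a dropped response
        // only means the same notifications are fetched again next time, never lost.
        long newLastSeenId = processAndReturnMaxId(response.body());
        saveLastSeenId(newLastSeenId);
    }

    // Placeholder stubs: real code would read/write B's persistent store and parse the body.
    static long loadLastSeenId() { return 0L; }
    static long processAndReturnMaxId(String body) { return 0L; }
    static void saveLastSeenId(long id) { }
}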
It really seems like Company A's webservice should be synchronous instead of asynchronous. If that is not possible, it may be a good idea to send an "ACK"-like request to a new Company A webservice that indicates a specific notification was received (by Company B) and can be deleted.
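A sketch of what such an acknowledgement call from B could look like; the /ack endpoint is hypothetical and would have to be added on A's side:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NotificationAck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String notificationId = "12345";   // a notification B has fully processed

        // Tell A that this notification arrived and may now be deleted;
        // until the ACK arrives, A keeps it and serves it again on the next poll.
        HttpResponse<Void> ack = client.send(
                HttpRequest.newBuilder(URI.create(
                        "https://a.example.com/notifications/" + notificationId + "/ack"))
                        .POST(HttpRequest.BodyPublishers.noBody())
                        .build(),
                HttpResponse.BodyHandlers.discarding());
        System.out.println("ack status: " + ack.statusCode());
    }
}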