RxJava: synchronize on one observable - concurrency

On each network request I have logic to perform authorization if needed and then repeat the initial request. The retryWhen operator is used for this. Simplified, it looks like this:
doRequestObservable().retryWhen(errors -> getLoginObservable()).subscribe(...)
But when there are many parallel requests, many parallel authorization calls are sent to the server.
How can I prevent multiple authorizations in this case?
Ideally, authorization should be performed only for the first failed request; subsequent failed requests should be retried without issuing another authorization request.
I could add synchronization inside the getLoginObservable() method, but maybe there is a better approach?
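One way to approach this (a hedged sketch rather than a definitive answer): have every retry subscribe to one shared login observable, so that concurrent failures reuse the same in-flight authorization instead of each starting its own. The sketch below assumes RxJava 2; doLogin() and doRequest() are hypothetical stand-ins for getLoginObservable() and doRequestObservable().

import io.reactivex.Observable;
import java.util.concurrent.atomic.AtomicReference;

public class SharedLoginExample {

    private final AtomicReference<Observable<String>> inFlightLogin = new AtomicReference<>();

    // Hypothetical authorization call; stands in for getLoginObservable().
    private Observable<String> doLogin() {
        return Observable.fromCallable(() -> {
            System.out.println("performing authorization");
            return "token";
        });
    }

    // All concurrent subscribers share one login; the slot is cleared when the login
    // finishes, so the next failure after that starts a fresh authorization.
    private Observable<String> sharedLogin() {
        return Observable.defer(() -> {
            Observable<String> candidate = doLogin()
                    .doFinally(() -> inFlightLogin.set(null)) // allow a fresh login next time
                    .replay(1)
                    .refCount();
            // Atomically install our candidate or reuse the login another thread already started.
            return inFlightLogin.updateAndGet(existing -> existing != null ? existing : candidate);
        });
    }

    // Hypothetical request that always fails, standing in for doRequestObservable().
    private Observable<String> doRequest() {
        return Observable.error(new RuntimeException("401"));
    }

    public void run() {
        doRequest()
                // Each error waits for (and shares) the single login, then the request is retried.
                // take(2) keeps this demo finite: completing the inner observable ends the retries.
                .retryWhen(errors -> errors.take(2).flatMap(e -> sharedLogin()))
                .subscribe(System.out::println, Throwable::printStackTrace);
    }
}

The point of the sketch is that the "synchronization" lives in the observable composition (replay(1).refCount() plus the shared AtomicReference) rather than in a synchronized block inside getLoginObservable().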

Related

Django(2.11) simultaneous (within 10ms) identical HTTP requests

Consider a POST/PUT REST API (using DRF).
If the server receives request1 and, within a couple of milliseconds, request2 that is identical in every way to request1 (a duplicate request), is there a way to avoid request2 being executed using some Django mechanism? Or should I deal with it manually by keeping some state?
Any inputs would be much appreciated.
There isn't anything out of the box, so you would need to write something yourself. A piece of custom middleware (https://docs.djangoproject.com/en/3.0/topics/http/middleware/) would potentially be best, as it would then run over all of the requests. You would need to capture and examine the requests, so you'd need fast storage of some sort, such as a memory store.
You could also look into the Python asyncio library - https://docs.python.org/3/library/asyncio-sync.html
Another possible solution would be a FIFO message queue configured to support de-duplication based on content. This would turn the request into a deferred process, though, so it may not be suitable for your needs.
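The deduplication check the first answer describes (fingerprint the request, look it up in a short-lived memory store) is language-agnostic. Below is a hedged sketch of that check written in Java to match the other examples in this thread; it is not Django middleware, and the window size and fingerprint scheme are assumptions.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DuplicateRequestGuard {

    private final Map<String, Long> recentRequests = new ConcurrentHashMap<>();
    private final long windowMillis;

    public DuplicateRequestGuard(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Returns true if an identical request was already seen inside the window.
    // The fingerprint would typically be a hash of method + path + body + user.
    public boolean isDuplicate(String fingerprint) {
        long now = System.currentTimeMillis();
        recentRequests.values().removeIf(seenAt -> now - seenAt > windowMillis); // evict stale entries
        return recentRequests.putIfAbsent(fingerprint, now) != null;
    }
}

A middleware/filter would compute the fingerprint per request and short-circuit when isDuplicate() returns true; in a multi-process deployment the in-process map would have to be replaced by a shared store such as Redis or memcached, which is why the answer mentions a fast memory store.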

What is the correct way to handle multiple levels of network requests?

I have a service which accepts HTTP requests from a customer site. The service then sends an HTTP request to a transactional email provider with information provided in the initial request to the service. The workflow looks like this:
CustomerSite ⟷ EmailService ⟷ TransactionEmailProvider
I can think of two possibilities for handling requests so that errors from the TransactionalEmailProvider can be reported to the CustomerSite:
1. The EmailService immediately sends an asynchronous request to the TransactionalEmailProvider when it receives a request from a CustomerSite. The EmailService immediately responds to the CustomerSite with a success code if the request was properly formed. If a failure happens when sending a request to the TransactionalEmailProvider, the EmailService sends a failure notification back to the CustomerSite via a POST request, using a webhook implementation.
2. The EmailService sends a request to the TransactionalEmailProvider and awaits the response before responding to the CustomerSite request with either a success or a failure.
Right now I'm implementing the first version because I don't want the responsiveness of the EmailService to be dependent on the responsiveness of the TransactionalEmailProvider.
Is this a reasonable way to process HTTP requests that are dependent upon a second level of HTTP requests? Are there situations in which one would be preferred over the other?
It really depends on the system requirements: how you want the system to behave when some of its components fail, or under varying workload.
If you want your system to be reactive or scalable, you should use asynchronous requests whenever possible; for this, your system should be message driven. You can read more about reactive systems here. This sounds like your first option.
If you want a simpler system, then use synchronous/blocking requests, like your option no. 2.
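A hedged sketch of what option 1 can look like (not the poster's code): the EmailService has already acknowledged the CustomerSite, forwards the request to the provider asynchronously, and only calls back on failure. It uses the JDK 11 HttpClient; the provider URL and the webhook payload format are made-up placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EmailServiceSketch {
    private final HttpClient http = HttpClient.newHttpClient();

    // Called after the EmailService has already returned a success code to the CustomerSite.
    void forwardToProvider(String emailJson, String customerWebhookUrl) {
        HttpRequest providerRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://transactional-email.example/send")) // placeholder provider endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(emailJson))
                .build();

        http.sendAsync(providerRequest, HttpResponse.BodyHandlers.ofString())
                .thenAccept(response -> {
                    if (response.statusCode() >= 400) {
                        notifyFailure(customerWebhookUrl, "provider returned " + response.statusCode());
                    }
                })
                .exceptionally(error -> {
                    notifyFailure(customerWebhookUrl, error.getMessage());
                    return null;
                });
    }

    // POSTs the failure back to the CustomerSite's webhook, as in option 1.
    private void notifyFailure(String webhookUrl, String reason) {
        HttpRequest notification = HttpRequest.newBuilder()
                .uri(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"status\":\"failed\",\"reason\":\"" + reason + "\"}"))
                .build();
        http.sendAsync(notification, HttpResponse.BodyHandlers.discarding());
    }
}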

Implementing an asynchronous web API

We are developing a web API which processes potentially very large amounts of user-submitted content, which means that calls to our endpoints might not return immediate results. We are therefore looking at implementing an asynchronous/non-blocking API. Currently our plan is to have the user submit their content via:
POST /v1/foo
The JSON response body contains a unique request ID (a UUID), which the user then submits as a parameter in subsequent polling GETs on the same endpoint:
GET /v1/foo?request_id=<some-uuid>
If the job is finished, the result is returned as JSON; otherwise a status update is returned (again as JSON).
(Unless they fail, both of the above calls simply return a "200 OK" response.)
Is this a reasonable way of implementing an asynchronous API? If not, what is the 'right' (and RESTful) way of doing this? The model described here recommends creating a temporary status-update resource and then a final result resource, but that seems unnecessarily complicated to me.
Actually, the way described in the blog post you mentioned is the 'right' RESTful way of processing asynchronous operations. I've implemented an API that handles large file uploads and conversion, and it does it this way. In my opinion this is not overly complicated, and it is definitely better than delaying the response to the client or something similar.
An additional note: if a task has failed, I would also return 200 OK, together with a representation of the task resource and the information that the resource creation has failed.
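For illustration, a hedged client-side sketch of the submit-then-poll flow described in the question, using the JDK 11 HttpClient. The host name, JSON field names, the "no status field means finished" convention, and the one-second poll interval are all assumptions, not part of the original API.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PollingClientSketch {
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    private static final String BASE = "https://api.example.com/v1/foo"; // placeholder host

    public static void main(String[] args) throws Exception {
        // 1. Submit the content; the response body is assumed to contain {"request_id":"<uuid>", ...}.
        HttpRequest submit = HttpRequest.newBuilder()
                .uri(URI.create(BASE))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"content\":\"...\"}"))
                .build();
        String submitBody = HTTP.send(submit, HttpResponse.BodyHandlers.ofString()).body();
        String requestId = extractRequestId(submitBody);

        // 2. Poll the same endpoint until the result (rather than a status update) comes back.
        while (true) {
            HttpRequest poll = HttpRequest.newBuilder()
                    .uri(URI.create(BASE + "?request_id=" + requestId))
                    .GET()
                    .build();
            String body = HTTP.send(poll, HttpResponse.BodyHandlers.ofString()).body();
            if (!body.contains("\"status\"")) { // assumption: finished results carry no "status" field
                System.out.println("result: " + body);
                break;
            }
            Thread.sleep(1000); // still in progress, wait before polling again
        }
    }

    // Crude extraction to keep the sketch dependency-free; use a real JSON parser in practice.
    private static String extractRequestId(String json) {
        int start = json.indexOf("\"request_id\":\"") + "\"request_id\":\"".length();
        return json.substring(start, json.indexOf('"', start));
    }
}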

Auditing Jetty Client requests and responses

I have a requirement to count the Jetty transactions and measure the time it took to process the request and get back the response, using JMX for our monitoring system.
I am using Jetty 8.1.7 and I can't seem to find a proper way to do this. I basically need to identify when a request is sent (due to Jetty's async approach this is triggered from thread A) and when the response is complete (as onResponseComplete() is invoked on another thread).
I usually use a ThreadLocal for such state in other areas where I need similar functionality, but obviously that won't work here.
Any ideas how to overcome this?
To use Jetty's async requests you basically have to subclass ContentExchange and override its methods. You can add an extra field to it containing a timestamp of when the request was sent, and use it later in your onResponseComplete() method to measure the processing time. If you need to know when your request was actually sent to the server, rather than when it was created, you can override the onRequestCommitted() and onRequestComplete() methods.
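A minimal sketch along the lines the answer describes, assuming Jetty 8's org.eclipse.jetty.client API; the JMX counter wiring is left out and a println stands in for it.

import java.io.IOException;
import org.eclipse.jetty.client.ContentExchange;
import org.eclipse.jetty.client.HttpClient;

public class TimedExchange extends ContentExchange {

    // Extra field suggested in the answer: when the request actually hit the wire.
    private volatile long sentAtNanos;

    @Override
    protected void onRequestCommitted() throws IOException {
        sentAtNanos = System.nanoTime(); // request has been committed to the connection
        super.onRequestCommitted();
    }

    @Override
    protected void onResponseComplete() throws IOException {
        long elapsedMillis = (System.nanoTime() - sentAtNanos) / 1_000_000;
        // Here you would update your JMX counters/timers instead of printing.
        System.out.println("exchange took " + elapsedMillis + " ms");
        super.onResponseComplete();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        client.start();

        TimedExchange exchange = new TimedExchange();
        exchange.setMethod("GET");
        exchange.setURL("http://example.com/"); // placeholder URL
        client.send(exchange);
        exchange.waitForDone(); // block in this demo; real code would stay asynchronous

        client.stop();
    }
}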

RESTful way to trigger server-side events

I have a situation where I need my API to have a call for triggering a server-side event; no information (besides authentication) is needed from the client, and nothing needs to be returned by the server. Since this doesn't fit well into the standard CRUD/resource interaction, should I take this as an indicator that I'm doing something wrong, or is there a RESTful design pattern for dealing with these conditions?
Your client can just:
POST /trigger
To which the server would respond with a 202 Accepted.
That way your request can still contain the appropriate authentication headers, and the API can be extended in the future if you need the client to supply an entity, or need to return a response with information about how to query the event status.
There's nothing "non-RESTful" about what you're trying to do here; REST principles don't have to correlate to CRUD operations on resources.
The spec for 202 says:
The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
You aren't obliged to send anything in the response, given the "SHOULD" in the definition.
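For concreteness, a hedged sketch of the "POST /trigger, respond 202 Accepted with no body" pattern using the JDK's built-in com.sun.net.httpserver; the port, path handling, and the event-kickoff stub are assumptions.

import java.io.IOException;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpServer;

public class TriggerEndpointSketch {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/trigger", exchange -> {
            if (!"POST".equals(exchange.getRequestMethod())) {
                exchange.sendResponseHeaders(405, -1); // only POST triggers the event
            } else {
                startServerSideEvent();                // kick off the work asynchronously
                exchange.sendResponseHeaders(202, -1); // 202 Accepted, empty body
            }
            exchange.close();
        });
        server.start();
    }

    // Placeholder for whatever server-side event the API is supposed to trigger.
    private static void startServerSideEvent() {
        new Thread(() -> System.out.println("event running")).start();
    }
}

Authentication headers would be checked in the handler (or a wrapping filter) before the event is started, which keeps the request contract as minimal as the question describes.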
REST defines the nature of the communication between the client and the server. In this case, I think the issue is that there is no information to transfer.
Is there any reason the client needs to initiate this at all? I'd say your server-side event should be entirely self-contained within the server. Perhaps kick it off periodically with a cron call?