How to make Jetty buffer the entire request body before invoking the servlet

My application has a very high CPU load average. The reason is that Jetty starts lots of threads to handle requests, and those threads may block waiting on data; when the data becomes ready, many threads become runnable at once. I want Jetty to wait until all the data has been read and only then start a thread to invoke the servlet, so that the servlet is never blocked.
Is this possible?

Not possible.
Jetty needs a thread either to read the request content body itself (for things like MIME multipart, form parameters, etc.) or to dispatch to your webapp so that your Servlet can read the request content body.
Then there is the added ability of Async I/O (introduced in Servlet 3.1), which allows you to write a Servlet that only uses a thread when it can actually read from or write to the socket, letting the thread fall back to the ThreadPool when neither is possible.
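To make that concrete, here is a minimal sketch of the Servlet 3.1 async read API, written in Scala to match the other code in this digest; the class name is hypothetical, and the servlet must be registered with asyncSupported enabled.

    import java.io.ByteArrayOutputStream
    import javax.servlet.ReadListener
    import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

    // Buffers the whole body with Servlet 3.1 async I/O, holding a thread
    // only while bytes are actually readable.
    class BufferingServlet extends HttpServlet {
      override def doPost(req: HttpServletRequest, resp: HttpServletResponse): Unit = {
        val async = req.startAsync()
        val in = req.getInputStream
        val body = new ByteArrayOutputStream()

        in.setReadListener(new ReadListener {
          private val chunk = new Array[Byte](4096)

          // Called when data is available; loop while isReady so we never block.
          override def onDataAvailable(): Unit =
            while (in.isReady && !in.isFinished) {
              val n = in.read(chunk)
              if (n > 0) body.write(chunk, 0, n)
            }

          // The entire body has arrived; do the real work here, then complete.
          override def onAllDataRead(): Unit = {
            resp.setStatus(200)
            async.complete()
          }

          override def onError(t: Throwable): Unit = async.complete()
        })
      }
    }

The container only calls onDataAvailable when bytes are ready, so the thread returns to the pool between network reads instead of blocking on them.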

Related

Continue request in Django REST framework

I have a request that takes more than 3 minutes to process. I want the server to accept the request, immediately answer 200, and deliver the result after the work finishes.
The workflow you've described is called asynchronous task execution.
The main idea is to remove time- or resource-consuming parts of the work from the code that handles HTTP requests and delegate them to some kind of worker. The worker might be a different thread or process, or even a separate service that runs on a different server.
This makes your application more responsive, as the user gets the HTTP response much more quickly. Also, with this approach you can display such UI-friendly things as progress bars and status marks for the task, create retry policies if a task fails, etc.
Example workflow:
the user makes an HTTP request that initiates the task
the server creates the task, adds it to the queue, and immediately returns an HTTP response containing a task_id
the front-end code starts AJAX polling for the results of the task, passing the task_id
the server handles each polling HTTP request, looks up the status information for that task_id, and returns it (either the results or "still waiting") in the HTTP response
the front-end displays a spinner while the server returns "still waiting" and shows the results once they are ready
The most popular way to do this in Django is the Celery distributed task queue.
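The Celery specifics are Python, but the shape of the workflow is independent of the framework. As a rough illustration only, here it is in Scala (to match this digest's other code), with an in-memory map standing in for the broker and result store; all names are hypothetical:

    import java.util.UUID
    import java.util.concurrent.ConcurrentHashMap
    import scala.concurrent.{ExecutionContext, Future}

    // In-process stand-in for a task queue: submit() returns a task_id at once;
    // poll() answers "still waiting" until the worker stores the result.
    object TaskRunner {
      private val results = new ConcurrentHashMap[String, String]()
      private implicit val ec: ExecutionContext = ExecutionContext.global

      def submit(work: () => String): String = {
        val taskId = UUID.randomUUID().toString
        Future {
          val r = work()         // the slow job runs off the request thread
          results.put(taskId, r) // stored for later polling
        }
        taskId                   // returned immediately with the 200 response
      }

      def poll(taskId: String): String =
        Option(results.get(taskId)).getOrElse("still waiting")
    }

A real deployment keeps the queue and the result store in an external broker, as Celery does, so tasks survive process restarts and can be spread across workers.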
When a request comes in, you will have to verify it, then send the response and use some mechanism to complete the work in the background. You will have to be sure that the work can actually be completed. You can use pipelining, where you put every task into a pipeline. Django-Celery is an option, but don't use it unless it is really required; find the easiest way to resolve the issue.

How to expose an asynchronous api as a custom akka stream Source now that ActorPublisher is deprecated?

With ActorPublisher deprecated in favor of GraphStage, it looks as though I have to give up my actor-managed state for GraphStageLogic-managed state. But with the actor-managed state I was able to mutate state by sending arbitrary messages to my actor, and with GraphStageLogic I don't see how to do that.
So previously, if I wanted to create a Source exposing data made available via HTTP request/response, then with ActorPublisher demand was communicated to my actor by Request messages, to which I could react by kicking off an HTTP request in the background and sending the responses back to my actor so I could push their contents downstream.
It is not obvious how to do this with a GraphStageLogic instance if I cannot send it arbitrary messages. Demand is communicated by OnPull(), to which I can react by kicking off an HTTP request in the background. But then, when the response comes in, how do I safely mutate the GraphStageLogic's state?
(aside: just in case it matters, I'm using Akka.Net, but I believe this applies to the whole Akka streams model. I assume the solution in Akka is also the solution in Akka.Net. I also assume that ActorPublisher will also be deprecated in Akka.Net eventually even though it is not at the moment.)
I believe the question is referring to "asynchronous side-channels", which are discussed here:
http://doc.akka.io/docs/akka/2.5.3/scala/stream/stream-customize.html#using-asynchronous-side-channels.
Using asynchronous side-channels
In order to receive asynchronous events that are not arriving as stream elements (for example a completion of a future or a callback from a 3rd party API) one must acquire an AsyncCallback by calling getAsyncCallback() from the stage logic. The method getAsyncCallback takes as a parameter a callback that will be called once the asynchronous event fires.
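A minimal Scala sketch of that side-channel, assuming a hypothetical fetchPage function that performs the HTTP call and completes a Future. The callback body always runs inside the stage, so mutating stage state (or pushing) there is safe, no matter which thread completed the Future:

    import akka.stream.{Attributes, Outlet, SourceShape}
    import akka.stream.stage.{GraphStage, GraphStageLogic, OutHandler}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.Future
    import scala.util.{Failure, Success, Try}

    // Source that fetches one element per pull via an asynchronous HTTP call.
    class HttpSource(fetchPage: () => Future[String]) extends GraphStage[SourceShape[String]] {
      val out: Outlet[String] = Outlet("HttpSource.out")
      override val shape: SourceShape[String] = SourceShape(out)

      override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
        new GraphStageLogic(shape) {
          // The async side-channel: invoke() may be called from any thread,
          // and the handler below always executes inside the stage.
          private val onResponse = getAsyncCallback[Try[String]] {
            case Success(body) => push(out, body)
            case Failure(ex)   => failStage(ex)
          }

          setHandler(out, new OutHandler {
            override def onPull(): Unit =
              // Kick off the request and route the result through the callback.
              fetchPage().onComplete(onResponse.invoke)
          })
        }
    }

Akka.Net's GraphStageLogic exposes the same callback mechanism, so the pattern carries over.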

Configuring spray-servlet to avoid request bottleneck

I have an application which uses spray-servlet to bootstrap my custom Spray routing Actor via spray.servlet.Initializer. The requests are then handed off to my Actor via spray.servlet.Servlet30ConnectorServlet.
From what I can gather, the Servlet30ConnectorServlet simply retrieves my Actor out of the ServletContext that the Initializer had set when the application started, and hands the HttpServletRequest to my Actor's receive method. This leads me to believe that only one instance of my Actor will have to handle all requests. If my Actor blocks in its receive method, then subsequent requests will queue waiting for it to complete.
Now I realize that I can code my routing Actor to use detach() or a complete that returns a Future; however, most of the documentation never alludes to having to do this.
If my above assumption is true (single Actor instance handling all requests), is there a way to configure the Servlet30ConnectorServlet to perhaps load balance the incoming requests amongst multiple instances of my routing Actor instead of just the one? Or is this something I'll have to roll myself by subclassing Servlet30ConnectorServlet?
I did some research and now I understand better how spray-servlet is working. It's not spray-servlet that dictates the strategy for how many Request Handler Actors are created but rather the plumbing code provided with the example I based my application on.
My assumption all along was that spray-servlet would essentially work like a traditional Java EE application dispatcher in a handler-per-request type of fashion (or some reasonable variant of that notion). That is not the case because it is routing the request to an Actor with a mailbox, not some singleton HttpServlet.
I am now delegating the requests to a pool of actors in order to reduce our potential for bottleneck when our system is under load.
import akka.routing.RoundRobinPool
val serviceActor = system.actorOf(RoundRobinPool(config.SomeReasonableSize).props(Props[BootServiceActor]), "my-route-actors")
I am still a bit baffled by the fact that the examples and documentation assume everyone will be writing non-blocking Request Handler Actors under spray. All of their documentation essentially demonstrates complete calls that do not return Futures, yet there is no mention in their literature that maybe, just maybe, you might want to create a reasonably sized pool of Request Handler Actors to prevent a slew of requests from bottlenecking the poor single overworked Actor. Or it's possible I've overlooked it.

Auditing Jetty Client requests and responses

I have a requirement to count Jetty transactions and measure the time it took to process each request and get back the response, using JMX for our monitoring system.
I am using Jetty 8.1.7 and I can't seem to find a proper way to do this. I basically need to identify when a request is sent (due to Jetty's async approach this is triggered from thread A) and when the response is complete (as onResponseComplete() is run on another thread).
I usually use a ThreadLocal for this kind of state in other areas where I need similar functionality, but obviously that won't work here.
Any ideas how to overcome this?
To use Jetty's async client you basically have to subclass ContentExchange and override its methods. You can add an extra field holding a timestamp of when the request was sent, and use it later in your onResponseComplete() method to measure the processing time. If you need to know when your request was actually sent to the server, rather than when it was created, you can override the onRequestCommitted() and onRequestComplete() methods.
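A sketch of that subclass, assuming Jetty 8's client API and written in Scala like the rest of the code here; report is a hypothetical hook into your JMX MBean:

    import org.eclipse.jetty.client.ContentExchange

    // Records the time between request commit and response completion.
    class TimedExchange(report: Long => Unit) extends ContentExchange(true) {
      @volatile private var sentAt = 0L

      // Fired once the request has actually been committed to the server.
      override def onRequestCommitted(): Unit = {
        sentAt = System.nanoTime()
        super.onRequestCommitted()
      }

      // Fired (possibly on another thread) when the full response is in.
      override def onResponseComplete(): Unit = {
        report(System.nanoTime() - sentAt) // elapsed nanos for this transaction
        super.onResponseComplete()
      }
    }

Because the timestamp lives on the exchange object itself rather than in a ThreadLocal, it does not matter that the two callbacks run on different threads.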

C++ Singleton Threading problem

I have a C++ singleton that runs as a separate thread. This singleton is derived from a base class provided by a library, and it overrides the method onLogon(...). The onLogon method is synchronous; it wants to know right away whether we accept the logon attempt.
The problem is that we need to pass the logon information, via a message, to a security server. We can register a callback with the security server listener (a separate thread) to get the results of the logon authentication message we sent. My question is how to block in the onLogon method so that the thread is woken up by the callback I've registered with the security server listener thread, and how I can then access the response returned from the security server in a thread-safe way (i.e., I need to be able to handle multiple concurrent logon requests).
I'm totally stumped.
Use an empty semaphore. After you send the credentials to the security server, take the semaphore. Since it is empty, this will block execution. Then have the callback function post to the semaphore, which resumes execution on the original thread.
Since callbacks typically allow an anonymous value to be passed as a parameter, you can register a pointer to a data structure that can be filled with the response.
I ended up going with boost::promise and boost::unique_future. It was perfect for what I needed.
http://www.boost.org/doc/libs/1_43_0/doc/html/thread/synchronization.html#thread.synchronization.futures
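The boost specifics are in the link above, but the shape of the pattern — one promise per logon request, fulfilled by the listener's callback — can be sketched roughly as follows, here in Scala to match the rest of this digest (the security-server interface and all names are hypothetical):

    import scala.concurrent.{Await, Promise}
    import scala.concurrent.duration._

    case class AuthResult(accepted: Boolean)

    // security(credentials, callback) sends the message and later invokes
    // the callback from the listener thread with the server's answer.
    class LogonHandler(security: (String, AuthResult => Unit) => Unit) {
      def onLogon(credentials: String): AuthResult = {
        val reply = Promise[AuthResult]()          // fresh state per request
        security(credentials, r => reply.success(r))
        // Block this thread until the listener thread fulfils the promise.
        Await.result(reply.future, 30.seconds)
      }
    }

Because each call allocates its own promise, concurrent logons never share mutable state, which answers the thread-safety half of the question.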