I have a C++ singleton that runs as a separate thread. This singleton is derived from a base class provided by a library and it overrides the method onLogon(...). The onLogon method is synchronous, it wants to know right away if we accept the logon attempt.
The problem is that we need to pass the logon information via a message to a security server. We can register a callback with the security-server listener (a separate thread) to get the results of the authentication message we sent. My question is: how do I block in the onLogon method so that the thread can be woken up by the callback I've registered with the security-server listener thread, and how can I then access the response returned from the security server in a thread-safe way (i.e., I need to be able to handle multiple concurrent logon requests)?
I'm totally stumped.
Use an empty semaphore. After you send the credentials to the security server, try to take the semaphore. Since it is empty, the take will block execution. Then have the callback function post to the semaphore, which resumes execution on the original thread.
Since callbacks typically allow an opaque user-data value to be passed as a parameter, you can register a pointer to a data structure that the callback fills with the response.
I ended up going with boost::promise and boost::unique_future. It was perfect for what I needed.
http://www.boost.org/doc/libs/1_43_0/doc/html/thread/synchronization.html#thread.synchronization.futures
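For reference, the same pattern works with the standard-library equivalents, std::promise and std::future (C++11). A minimal sketch, with a spawned thread standing in for the security-server listener that would normally fulfil the promise from its registered callback:

```cpp
#include <future>
#include <string>
#include <thread>
#include <utility>

struct AuthResult {
    bool accepted;
    std::string reason;
};

bool onLogon(/* credentials */) {
    // One promise per logon attempt, so concurrent requests never share state.
    std::promise<AuthResult> promise;
    std::future<AuthResult> future = promise.get_future();

    // Stand-in for the security-server listener thread: the registered
    // callback fulfils the promise when the authentication reply arrives.
    std::thread listener([p = std::move(promise)]() mutable {
        p.set_value(AuthResult{true, "ok"});
    });

    AuthResult result = future.get();  // blocks until set_value() runs
    listener.join();
    return result.accepted;
}
```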
Related
I have an app which sends off HTTP requests and processes the received responses. The main thread is blocked until a response comes back; otherwise we couldn't process the data. To send these requests, the user must be authenticated. I wish to catch a 401 response and, before returning the response for processing by my app, prompt the user for authentication. Depending on the outcome, I want to retry the original request and return that response instead, or, if authentication fails, return the original 401 response.
I'm using the C++ REST SDK to send HTTP requests. These happen in another thread (pplx::task). I'm also using MFC modal dialog to prompt for authentication. Some of you may see the deadlock that occurs. If not, let me explain more.
The main thread waits for the HTTP request to complete. Inside the worker thread, I catch a 401 and wish to launch a dialog. To do so, I use a boost::signal. This signal calls SendMessage to the window handle on which I wish to display the dialog. After the message is processed by the MFC message loop, it will launch the dialog (on the main thread). This relies on the MFC message loop, which is blocked waiting for the HTTP request. In short, the main thread is already waiting for the request to finish, so it can't run its message loop to receive the call from SendMessage.
Main thread is waiting on worker thread. Worker thread needs to launch a dialog on main thread before it can continue. Deadlock. Does anyone have any clever solutions for ways around this?
I think the simplest solution here is to redesign the way you are handling your threads.
I suggest that instead of having a single thread for requests, you spawn a new thread for each request and have it return the status code (whatever it is). You can then handle any retry-with-authentication logic in your main thread (i.e., display an authentication dialog, then respawn the request thread with the credentials).
This also allows you to encapsulate your request handler better, which is a big plus. To properly encapsulate this logic (so you don't have to check every request) you should define some kind of request handler (either a class or a function). For example:
StatusCode make_request(...) {
// Deal with the logic on authentication here
}
Where StatusCode is a type for a HTTP status code.
Of course this doesn't solve the issue of your UI thread potentially waiting for your worker thread to finish, so you also need some kind of UI refresh method that is called every so often and checks the status of all the worker threads (e.g. by checking the returned std::futures). You would also want to change my example above to spawn a separate thread and return an std::future in this case.
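Here is a minimal sketch of that shape: make_request runs each request on its own thread via std::async, and the UI thread polls the future without blocking. The request body is a stand-in (the names make_request and try_get_status are illustrative, not from any library):

```cpp
#include <chrono>
#include <future>

using StatusCode = int;  // e.g. 200, 401

// Each call spawns its own thread and returns a future the UI thread
// can poll without blocking its message loop.
std::future<StatusCode> make_request(bool authenticated) {
    return std::async(std::launch::async, [authenticated]() -> StatusCode {
        return authenticated ? 200 : 401;  // stand-in for the real HTTP call
    });
}

// Called periodically from the UI/message loop: a non-blocking check.
bool try_get_status(std::future<StatusCode>& f, StatusCode& out) {
    if (f.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
        out = f.get();
        return true;   // caller can now show an auth dialog on 401 and retry
    }
    return false;      // not done yet; keep pumping the message loop
}
```

Because the UI thread never blocks on the future, the MFC message loop keeps running and the dialog can be shown from the main thread without the deadlock described above.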
With ActorPublisher deprecated in favor of GraphStage, it looks as though I have to give up my actor-managed state for GraphStageLogic-managed state. But with the actor-managed state I was able to mutate state by sending arbitrary messages to my actor, and with GraphStageLogic I don't see how to do that.
So previously, if I wanted to create a Source to expose data that is made available via HTTP request/response, then with ActorPublisher demand was communicated to my actor by Request messages, to which I could react by kicking off an HTTP request in the background and sending the responses to my actor so I could push their contents downstream.
It is not obvious how to do this with a GraphStageLogic instance if I cannot send it arbitrary messages. Demand is communicated by OnPull(), to which I can react by kicking off an HTTP request in the background. But then when the response comes in, how do I safely mutate the GraphStageLogic's state?
(aside: just in case it matters, I'm using Akka.Net, but I believe this applies to the whole Akka streams model. I assume the solution in Akka is also the solution in Akka.Net. I also assume that ActorPublisher will also be deprecated in Akka.Net eventually even though it is not at the moment.)
I believe that the question is referring to "asynchronous side-channels" and is discussed here:
http://doc.akka.io/docs/akka/2.5.3/scala/stream/stream-customize.html#using-asynchronous-side-channels.
Using asynchronous side-channels
In order to receive asynchronous events that are not arriving as stream elements (for example a completion of a future or a callback from a 3rd party API) one must acquire an AsyncCallback by calling getAsyncCallback() from the stage logic. The method getAsyncCallback takes as a parameter a callback that will be called once the asynchronous event fires.
My application has a very high CPU load average. The reason is that Jetty starts lots of threads to handle requests, and those threads may block waiting on data; when the data becomes ready, lots of threads become runnable at once. I want Jetty to wait until all the request data has been read, and only then start a thread to invoke the servlet, so that the servlet is never blocked.
Is this possible?
Not possible.
Jetty needs a thread to either read the request content body itself (for things like MIME multipart, form parameters, etc.), or to use that thread to dispatch to your webapp for your Servlet to read the request content body.
Then there is the added ability of async I/O (introduced in Servlet 3.1), which allows you to write a Servlet that only uses a thread when it can actually read from or write to the socket, letting the thread fall back to the ThreadPool when neither is possible.
The use case is this:
An actor is bound to spray IO, receiving and handling all inbound HTTP requests coming through a specified port.
For each inbound request the actor needs to send an asynchronous outbound HTTP request to a different external endpoint, get back a response, and send a response back to the originating party.
Using spray-client's sendReceive returns a future. This means the actor will continue to handle the next inbound message in its mailbox without waiting for the response to the outbound request it just sent. Meanwhile, the response to the outbound request might arrive and execute in the Future's callback; since that callback is not queued on the actor's mailbox, it might run in parallel with the actor, breaking the guarantee that an actor is executed by only one thread at a time.
I wonder how this use case can be handled without breaking the actor's thread encapsulation. How can an actor make use of spray-client (for sending/receiving asynchronous HTTP events) in an actor-safe way?
It is perfectly safe to complete with a future rather than the actual value in spray-routing, so, for instance, you can do the following:
get {
  complete {
    val resultFuture: Future[Result] = ...
    resultFuture.onComplete { ... }
    resultFuture
  }
}
Of course, you will need to make sure that you handle timeouts and error conditions as well.
The question is which thread executes the callback. If it is not queued on the actor's mailbox, it could execute in parallel with the actor's receive handling, which might break its thread encapsulation...
To my understanding, the same issue exists with the Akka actor ask method, which returns a Future; the documentation warns not to operate on the actor's mutable state from within the callback, since that may cause synchronization problems. See: http://doc.akka.io/docs/akka/snapshot/scala/actors.html
"Warning:
When using future callbacks, such as onComplete, onSuccess, and onFailure, inside actors you need to carefully avoid closing over the containing actor’s reference, i.e. do not call methods or access mutable state on the enclosing actor from within the callback. This would break the actor encapsulation and may introduce synchronization bugs and race conditions because the callback will be scheduled concurrently to the enclosing actor. Unfortunately there is not yet a way to detect these illegal accesses at compile time."
For object pools, we say that whenever a client asks for a resource, we give it one from the pool. Suppose I check out a resource, change its state, and check it back in. What happens on the next request: does the pool let a client check out this resource, or is the resource now invalid for the pool?
If an object released to the pool became invalid for re-use, the pool would be somewhat pointless. If a class requires initialization or re-initialization, you could do it in the pool's get() or release() methods. If re-initialization requires much more than simple assignments (e.g. a pool of socket objects that must not be re-used for 5 minutes), then you may have to resort to a dedicated pool-manager thread that effectively splits the pool into a couple of puddles: those objects available for re-use and those awaiting re-initialization.
Rgds,
Martin
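A minimal C++ sketch of the re-initialize-on-release idea: the pool's release() resets the object, so every checkout observes the same "original" state. The Connection type and its pending field are illustrative, not from any real library:

```cpp
#include <cstddef>
#include <deque>
#include <memory>
#include <mutex>

// Example pooled resource; `pending` stands in for state that gets
// mutated while the object is checked out.
struct Connection {
    int pending = 0;
    void reset() { pending = 0; }
};

class Pool {
public:
    explicit Pool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            free_.push_back(std::make_unique<Connection>());
    }
    std::unique_ptr<Connection> get() {
        std::lock_guard<std::mutex> lk(mu_);
        if (free_.empty()) return nullptr;   // or block/grow, per policy
        auto c = std::move(free_.front());
        free_.pop_front();
        return c;
    }
    void release(std::unique_ptr<Connection> c) {
        c->reset();                          // re-initialization on check-in
        std::lock_guard<std::mutex> lk(mu_);
        free_.push_back(std::move(c));
    }
private:
    std::mutex mu_;
    std::deque<std::unique_ptr<Connection>> free_;
};
```

If re-initialization is expensive or time-gated, the reset() call would move out of release() and into the dedicated pool-manager thread described above.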
Or, alternatively, you should not return the resource to the pool until it is back in its original state. For example, imagine a web server with a listener thread and a pool of 10 worker threads. The listener thread accepts incoming HTTP requests and dispatches them to the worker threads for processing. Worker threads in the pool (not checked out) are in their "original" state, i.e. idle, not processing a request. Once the listener thread checks out a worker thread and gives it an HTTP request, the worker thread begins processing the request; in other words, its state is "working". Once it has finished processing the request and has sent the HTTP reply to the client, it is "idle" again and goes back into the pool. Thus, all threads not currently checked out of the pool are always in their original state, "idle".