C++ waiting for response asynchronously, best design?

I have a program with a main loop which must keep running. Sometimes requests are made over the network, so I defer them to a request-making service which spawns another thread. What is the best way to act on the eventual response?
My idea is to set a variable when making the request, protect it with a mutex, and have the service thread flip the variable (and store the response) when it finishes. This means I must continually check the variable in the main loop. Is this the best way?
I'm familiar with async programming in JavaScript, but there everything runs on a single thread, so a callback can do all the work safely.
Thank you.
EDIT: I'm using C++17.

For a lot of traffic I would use a thread to perform the networking, plus an "IPC" object containing a mutex, a deque and a condition variable.
Use a std::shared_ptr to share the IPC object between the main and thread contexts.
When the thread receives a message, it locks the mutex (use std::lock_guard) and pushes the message onto the deque; then, outside the lock, it signals the condition variable.
The main thread waits on the condition variable; when signalled, it locks the mutex and pops anything from the deque. Note that the mutex protects only the deque.
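A minimal sketch of that arrangement, assuming a std::string payload (the names are mine; a real version would add a shutdown flag and error handling):

#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>
#include <string>
#include <thread>

struct Ipc {
    std::mutex mtx;                       // protects only the deque
    std::condition_variable cv;
    std::deque<std::string> messages;
};

void network_thread(std::shared_ptr<Ipc> ipc) {
    for (;;) {
        std::string msg = /* receive from the network */ "response";
        {
            std::lock_guard<std::mutex> lock(ipc->mtx);
            ipc->messages.push_back(std::move(msg));
        }                                 // release the lock before signalling
        ipc->cv.notify_one();
    }
}

void main_loop(std::shared_ptr<Ipc> ipc) {
    for (;;) {
        std::unique_lock<std::mutex> lock(ipc->mtx);
        ipc->cv.wait(lock, [&] { return !ipc->messages.empty(); });
        std::string msg = std::move(ipc->messages.front());
        ipc->messages.pop_front();
        lock.unlock();
        // act on msg outside the lock
    }
}

int main() {
    auto ipc = std::make_shared<Ipc>();
    std::thread net(network_thread, ipc);
    main_loop(ipc);                       // runs forever in this sketch
    net.join();
}

If the main loop cannot afford to block, replace cv.wait with cv.wait_for using a short (or zero) timeout and simply carry on when nothing is queued.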
Another approach would be to use std::async to receive the message; the main program would then wait on the returned future with the get method, which blocks until the async task completes.
I'd put the choice largely down to how much networking you are intending to do; if it's only an occasional "open-send-receive-close" transaction then certainly look at using async.
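A rough sketch of the std::async route; fetch_over_network is a stub standing in for the real transaction, and polling the future with a zero timeout keeps the main loop responsive:

#include <chrono>
#include <future>
#include <string>

// Hypothetical one-shot "open-send-receive-close" call.
std::string fetch_over_network() { return "response"; }

int main() {
    auto fut = std::async(std::launch::async, fetch_over_network);

    // Either block until the result is ready with fut.get(), or keep the
    // main loop running and poll without blocking:
    while (fut.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready) {
        // do other main-loop work here
    }
    std::string response = fut.get();   // already ready, so this returns immediately
    // use response ...
}

Bear in mind that the future returned by std::async(std::launch::async, ...) blocks in its destructor until the task finishes, which is exactly the caveat raised in the related question below.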

Related

How do I execute a C++ function asynchronously and not block/wait?

I want to execute a function asynchronously and not wait for it to complete. I initially thought I could use std::async with launch::async, but the returned future's destructor blocks until the function is complete.
Is there a way of running a function on a thread pool using stl without blocking?
You should spawn a single new thread which waits on a counting semaphore. When it is awoken (unblocked), it will send one RPC request and decrement the counter. When the user clicks the button, increment the counter. The same thread can service all requests throughout the program's lifetime.
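A sketch of that worker using C++20's std::counting_semaphore (on C++17 an equivalent can be hand-rolled from a mutex and condition variable); send_one_rpc_request is a stand-in for the real call:

#include <semaphore>   // C++20
#include <thread>

std::counting_semaphore<1024> pending{0};   // number of queued requests

void send_one_rpc_request() { /* stand-in for the real RPC */ }

void worker() {
    for (;;) {
        pending.acquire();        // blocks until the count is > 0, then decrements it
        send_one_rpc_request();
    }
}

// Called on the UI thread when the user clicks the button.
void on_button_click() {
    pending.release();            // increments the count, waking the worker if it was blocked
}

int main() {
    std::thread(worker).detach(); // one worker services all requests for the program's lifetime
    // ... run the UI / main loop, which calls on_button_click() ...
}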
You're looking for std::thread::detach. http://en.cppreference.com/w/cpp/thread/thread/detach
You can create a thread, and then detach from it. At that point you can delete your thread handle and the thread will run without you.
Incidentally it's usually considered bad form to use this technique. Generally you should care about the state of the thread, and should try to shut it down gracefully at program end, but in practice this is a useful trick for when you really don't care.
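A minimal example of the fire-and-forget pattern described above (names are mine):

#include <chrono>
#include <thread>

void do_background_work() {
    // ... work that nobody waits for ...
    std::this_thread::sleep_for(std::chrono::seconds(1));
}

int main() {
    std::thread t(do_background_work);
    t.detach();   // t no longer owns the thread; it is safe to let t go out of scope
    // main() may return before do_background_work finishes; at process exit the
    // detached thread is simply torn down, which is the "bad form" caveat above.
}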
This proposal talks about executors... it looks like the kind of thing I was hoping I'd find existed already, but it looks like it doesn't.
http://isocpp.org/files/papers/n4039.html

pthreads: perform function on main thread

I'm looking for the analogous of Cocoa's
-[NSObject performSelectorOnMainThread: withObject: waitUntilDone:]
method.
So basically I have a function that does some work on a separate thread but it must perform some synchronous calls that need to be performed on the main one.
In Cocoa, the message is added to the run loop, which is drained as part of its iteration.
To simulate this you'll want:
a run loop
an abstract message system
a reference counting mechanism (in most cases)
a way to add those messages to a run loop for scheduled execution
timers would be a nice addition
To accomplish something similar using pthread interfaces exclusively, start by reading up on conditions (pthread_cond_t).
I know of no pthread interface with a 1-to-1 relationship to what you're trying to accomplish. Conditions also operate without run loops, so you may need to bring that to the table if you do not reuse an existing run loop implementation. If you do use run loops, then you just need a lock to add messages to a thread with a run loop.
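A bare-bones run loop of that kind, sketched with standard C++ primitives rather than raw pthreads (pthread_mutex_t and pthread_cond_t would be the direct translation); the class and method names are mine:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

class RunLoop {
public:
    // Callable from any thread: schedule work to execute on the loop's thread.
    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            tasks_.push_back(std::move(task));
        }
        cv_.notify_one();
    }

    // Run this on the main thread; it executes posted tasks as they arrive.
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mtx_);
                cv_.wait(lock, [&] { return !tasks_.empty(); });
                task = std::move(tasks_.front());
                tasks_.pop_front();
            }
            task();   // execute outside the lock
        }
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> tasks_;
};

The main thread calls run(); any other thread calls post() with the work that must execute on the main thread.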
pthreads are a very low-level abstraction, so there's no easy way to do this with raw pthreads. Typically you'll want to write to a file descriptor to wake up an event loop on the main thread, then pass it a pointer to the function you want to run. You could even write pointer values onto a pipe(), then have the main thread execute them.
To wait synchronously, you can simply have a mutex and condition variable, plus completion flag on these execution request objects. Have the child thread wait on the mutex/condvar/completion flag, then in the main thread (under the mutex) set the flag and signal the cvar. Cleanup of the request structure would be done in the child.
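For the synchronous waitUntilDone-style call described in the previous paragraph, a minimal sketch of such a request object, again using standard C++ primitives in place of the pthread equivalents:

#include <condition_variable>
#include <functional>
#include <mutex>

// One request: the child thread fills this in, hands its address to the main
// thread's event loop (e.g. by writing it to a pipe), then blocks until done.
struct MainThreadRequest {
    std::function<void()> fn;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
};

// Child thread side: wait for the main thread to run the function.
void wait_for_completion(MainThreadRequest& req) {
    std::unique_lock<std::mutex> lk(req.m);
    req.cv.wait(lk, [&] { return req.done; });
    // the child owns the request, so cleanup happens here
}

// Main thread side: called from its event loop when the request arrives.
void execute_request(MainThreadRequest& req) {
    req.fn();
    {
        std::lock_guard<std::mutex> lk(req.m);
        req.done = true;
    }
    req.cv.notify_one();
}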
To be more specific, it'd help if you could mention what event loop you have running on your main thread.

How to implement a timed wait around a blocking call?

So, the situation is this. I've got a C++ library that is doing some interprocess communication, with a wait() function that blocks and waits for an incoming message. The difficulty is that I need a timed wait, which will return with a status value if no message is received in a specified amount of time.
The most elegant solution is probably to rewrite the library to add a timed wait to its API, but for the sake of this question I'll assume it's not feasible. (In actuality, it looks difficult, so I want to know what the other option is.)
Here's how I'd do this with a busy-wait loop, in pseudocode:
while (message == false && current_time - start_time < timeout)
{
    if (Listener.new_message()) message = true;
}
I don't want a busy wait that eats processor cycles, though. And I also don't want to just add a sleep() call in the loop to lower the processor load, as that means a slower response. I want something that does this with proper blocking and wake-ups. If the better solution involves threading (which seems likely), we're already using boost::thread, so I'd prefer to use that.
I'm posting this question because this seems like the sort of situation that would have a clear "best practices" right answer, since it's a pretty common pattern. What's the right way to do it?
Edit to add: A large part of my concern here is that this is in a spot in the program that's both performance-critical and critical to avoid race conditions or memory leaks. Thus, while "use two threads and a timer" is helpful advice, I'm still left trying to figure out how to actually implement that in a safe and correct way, and I can easily see myself making newbie mistakes in the code that I don't even know I've made. Thus, some actual example code would be really appreciated!
Also, I have a concern about the multiple-threads solution: If I use the "put the blocking call in a second thread and do a timed-wait on that thread" method, what happens to that second thread if the blocked call never returns? I know that the timed-wait in the first thread will return and I'll see that no answer has happened and go on with things, but have I then "leaked" a thread that will sit around in a blocked state forever? Is there any way to avoid that? (Is there any way to avoid that and avoid leaking the second thread's memory?) A complete solution to what I need would need to avoid having leaks if the blocking call doesn't return.
You could use sigaction(2) and alarm(2), which are both POSIX. You set a callback action for the timeout using sigaction, then you set a timer using alarm, then make your blocking call. The blocking call will be interrupted if it does not complete within your chosen timeout (in seconds; if you need finer granularity you can use setitimer(2)).
Note that signals in C are somewhat hairy, and there are fairly onerous restrictions on what you can do in your signal handler.
This page is useful and fairly concise:
http://www.gnu.org/s/libc/manual/html_node/Setting-an-Alarm.html
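A hedged sketch of that approach; read() on stdin stands in for the library's blocking wait(), and error handling is omitted:

#include <cerrno>
#include <csignal>
#include <cstdio>
#include <unistd.h>

static void on_alarm(int) { /* nothing to do; delivery alone interrupts the blocking call */ }

int main() {
    struct sigaction sa {};
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                    // deliberately not SA_RESTART, so the call is interrupted
    sigaction(SIGALRM, &sa, nullptr);

    alarm(5);                           // deliver SIGALRM in 5 seconds
    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   // stand-in for the blocking wait()
    alarm(0);                           // cancel the timer if the call returned in time

    if (n < 0 && errno == EINTR)
        std::puts("timed out");
    else
        std::puts("received data");
}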
What you want is something like select(2), depending on the OS you are targeting.
It sounds like you need a 'monitor', capable of signaling availability of resource to threads via a shared mutex (typically). In Boost.Thread a condition_variable could do the job.
You might want to look at timed locks: your blocking method can acquire the lock before starting to wait and release it as soon as the data is available. You can then try to acquire the lock (with a timeout) in your timed-wait method.
Encapsulate the blocking call in a separate thread. Have an intermediate message buffer in that thread that is guarded by a condition variable (as said before). Make your main thread timed-wait on that condition variable. Receive the intermediately stored message if the condition is met.
So basically put a new layer capable of timed-wait between the API and your application. Adapter pattern.
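A sketch of that adapter layer, here with std::condition_variable and std::thread (boost::thread offers the same timed-wait facilities); blocking_wait_for_message is a stub standing in for the library call being wrapped:

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <optional>
#include <string>
#include <thread>

std::string blocking_wait_for_message() { /* the library's blocking wait() */ return "msg"; }

class TimedReceiver {
public:
    TimedReceiver() {
        std::thread([this] {
            for (;;) {
                std::string msg = blocking_wait_for_message();   // may block indefinitely
                {
                    std::lock_guard<std::mutex> lock(mtx_);
                    buffer_ = std::move(msg);
                }
                cv_.notify_one();
            }
        }).detach();   // if the blocking call never returns, this is the thread you "leak"
    }

    // Returns the message, or std::nullopt if none arrived within the timeout.
    std::optional<std::string> wait_for(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lock(mtx_);
        if (!cv_.wait_for(lock, timeout, [this] { return buffer_.has_value(); }))
            return std::nullopt;
        std::optional<std::string> out = std::move(buffer_);
        buffer_.reset();
        return out;
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::optional<std::string> buffer_;
};

Note that the detached worker captures this, so the TimedReceiver must outlive it (in practice, live for the whole program); that is exactly the runaway-thread concern discussed just below.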
Regarding
what happens to that second thread if the blocked call never returns?
I believe there is nothing you can do to recover cleanly without cooperation from the called function (or library). 'Cleanly' means cleaning up all resources owned by that thread, including memory, other threads, locks, files, locks on files, sockets, GPU resources... Un-cleanly, you can indeed kill the runaway thread.

Inter-thread communication. How to send a signal to another thread

In my application I have two threads
a "main thread" which is busy most of the time
an "additional thread" which sends out some HTTP request and which blocks until it gets a response.
However, the HTTP response can only be handled by the main thread, since it relies on its thread-local storage and on non-threadsafe functions.
I'm looking for a way to tell the main thread that an HTTP response was received, along with the corresponding data. The main thread should be interrupted by the additional thread, process the HTTP response as soon as possible, and afterwards continue working from the point where it was interrupted.
One way I can think of is that the additional thread suspends the main thread using SuspendThread, copies the TLS from the main thread using some inline assembler, executes the response-processing function itself and resumes the main thread afterwards.
Another thought is setting a breakpoint on some specific address in the second thread's callback routine, so that the main thread gets notified when the second thread's instruction pointer reaches that breakpoint and has therefore received the HTTP response.
However, neither method seems nice at all; they hurt even to think about, and they don't look reliable.
What can I use to interrupt my main thread, telling it to be polite and process the HTTP response before doing anything else? Answers without dependencies on libraries are appreciated, but I would also accept a dependency if it provides a nice solution.
A follow-up question (regarding the QueueUserAPC solution) was answered and explained that there is no safe way to get push behaviour in my case.
This may be one of those times where one works themselves into a very specific idea without reconsidering the bigger picture. There is no singular mechanism by which a single thread can stop executing in its current context, go do something else, and resume execution at the exact line from which it broke away. If it were possible, it would defeat the purpose of having threads in the first place. As you already mentioned, without stepping back and reconsidering the overall architecture, the most elegant of your options seems to be using another thread to wait for an HTTP response, have it suspend the main thread in a safe spot, process the response on its own, then resume the main thread. In this scenario you might rethink whether thread-local storage still makes sense or if something a little higher in scope would be more suitable, as you could potentially waste a lot of cycles copying it every time you interrupt the main thread.
What you are describing is what QueueUserAPC does. But the notion of using it for this sort of synchronization makes me a bit uncomfortable. If you don't know that the main thread is in a safe place to be interrupted, then you probably shouldn't interrupt it.
I suspect you would be better off giving the main thread's work to another thread so that it can sit and wait for you to send it notifications to handle work that only it can handle.
PostMessage or PostThreadMessage usually works really well for handing off bits of work to your main thread. Posted messages are handled before user input messages, but not until the thread is ready for them.
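A rough Win32 sketch of the PostThreadMessage hand-off; WM_APP_HTTP_DONE and the payload layout are my own invention, and the main thread must already be pumping messages:

#include <windows.h>
#include <memory>
#include <string>

constexpr UINT WM_APP_HTTP_DONE = WM_APP + 1;

// Worker thread: hand the response off to the main thread.
// mainThreadId comes from GetCurrentThreadId() called on the main thread at startup.
void notify_main_thread(DWORD mainThreadId, std::string response) {
    auto* payload = new std::string(std::move(response));   // freed by the main thread below
    PostThreadMessage(mainThreadId, WM_APP_HTTP_DONE, 0,
                      reinterpret_cast<LPARAM>(payload));
}

// Main thread's message pump; posted messages are picked up between other work.
void message_loop() {
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0) {
        if (msg.message == WM_APP_HTTP_DONE) {
            std::unique_ptr<std::string> response(
                reinterpret_cast<std::string*>(msg.lParam));
            // handle *response here, on the main thread
            continue;
        }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}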
I might not understand the question, but CreateSemaphore and WaitForSingleObject should work. If one thread is waiting for the semaphore, it will resume when the other thread signals it.
Update based on the comment: The main thread can call WaitForSingleObject with a wait time of zero. In that situation, it will resume immediately if the semaphore is not signaled. The main thread could then check it on a periodic basis.
It looks like the answer should be discoverable from Microsoft's MSDN, especially the section on 'Synchronizing Execution of Multiple Threads'.
If your main thread is a GUI thread, why not send a Windows message to it? That's what we all do to interact with the Win32 GUI from worker threads.
One way to do this that is deterministic is to periodically check whether an HTTP response has been received.
It's better for you to say what you're trying to accomplish.
In this situation I would do a couple of things. First and foremost, I would restructure the work that the main thread is doing into pieces as small as possible. That gives you a series of safe places to break execution. Then create a work queue, probably using the Microsoft SList, which lets one thread add entries while another reads them without explicit locking.
Once you have that in place you can essentially make your main thread run in a loop over each piece of work, periodically checking whether there are requests to handle in the queue. Longer term, what is nice about an architecture like that is that you could fairly easily eliminate the thread-local storage and parallelize the main thread by converting the SList into a full work queue (probably still backed by the SList) and making the small pieces of work and the responses into work objects which can be dynamically distributed across any available threads.
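A simplified sketch of that loop; for brevity it guards the queue with a std::mutex rather than the lock-free Windows SList the answer mentions, and all names are mine:

#include <deque>
#include <functional>
#include <mutex>

std::mutex g_queue_mutex;
std::deque<std::function<void()>> g_requests;   // filled by the HTTP thread

// HTTP thread: enqueue a handler for the response it just received.
void enqueue_request(std::function<void()> handler) {
    std::lock_guard<std::mutex> lock(g_queue_mutex);
    g_requests.push_back(std::move(handler));
}

// Main thread: one small piece of its own work, then drain pending requests.
void main_loop_iteration() {
    // ... do the next small piece of the main thread's own work ...

    std::deque<std::function<void()>> pending;
    {
        std::lock_guard<std::mutex> lock(g_queue_mutex);
        pending.swap(g_requests);
    }
    for (auto& handler : pending)
        handler();   // runs on the main thread, so TLS and non-threadsafe calls are fine
}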

Atomic Operation C++

In C++ on the Windows platform, I want to execute a set of function calls atomically so that execution doesn't switch to other threads in my process. How do I go about doing that? Any ideas, hints?
EDIT: I have a piece of code like:
someObject->Restart();
WaitForSingleObject(handle, INFINITE);
Now the Restart() function does its work asynchronously, so it returns quickly, and when that someObject has restarted it sends me an event from another thread, where I signal the event handle I'm waiting on and thus continue processing. But the problem is that before the code reaches WaitForSingleObject(), I receive the restart-completion event and signal the event handle, and after that WaitForSingleObject() never returns since the event is not signaled again. That's why I want to execute both Restart() and WaitForSingleObject() atomically.
This is generally not possible. You can't force the OS to not switch to other threads.
What you can do is one of the following:
Use locks, mutexes, critical sections or semaphores to synchronize a handful of threads that touch the same data.
Use basic operations that are atomic, such as compare-and-exchange or atomic add, in the form of Win32 API calls such as InterlockedIncrement() and InterlockedCompareExchange().
You don't want all threads to wait, you just want to wait for the new thread to be done, without the risk of missing the signal. This can be done using a semaphore.
Create a semaphore known by both this code and the code eventually executed by Restart, using CreateSemaphore(NULL,0,1,NULL).
In the code you've shown, you'll still use WaitForSingleObject to wait for your semaphore. When the thread executing the Release code is done with its work, have it call ReleaseSemaphore.
If ReleaseSemaphore is called first, WaitForSingleObject will let you pass immediately. If WaitForSingleObject is called first, it will wait for ReleaseSemaphore.
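A small sketch of that arrangement; SomeObject and the completion callback are stand-ins for the questioner's code:

#include <windows.h>

struct SomeObject { void Restart() { /* kicks off the asynchronous restart */ } };

// Known to both the waiting code and the restart-completion code: count 0, maximum 1.
HANDLE g_restartDone = CreateSemaphore(nullptr, 0, 1, nullptr);

// Main thread:
void restart_and_wait(SomeObject* someObject) {
    someObject->Restart();                          // returns quickly, work continues elsewhere
    WaitForSingleObject(g_restartDone, INFINITE);   // safe even if the release already happened
}

// Called (possibly on another thread) when the restart has finished:
void on_restart_complete() {
    ReleaseSemaphore(g_restartDone, 1, nullptr);    // count goes 0 -> 1; the wait can pass
}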
MSDN should also help you.
A general solution to lost event race is a counting semaphore.
Are you using PulseEvent() to signal your handle? If so, that's the problem.
According to MSDN, "If no threads are waiting, or if no thread can be released immediately, PulseEvent simply sets the event object's state to nonsignaled and returns."
So if the handle is signaled before you wait on it, the handle is placed immediately in the nonsignaled state by PulseEvent(). That would appear to be why you are "missing" the event. To correct this, replace PulseEvent() with SetEvent().
With this scenario, though, you may need to reset the event after the wait is complete. This of course depends on whether this code is executed more than once during the lifetime of your application. Assuming your waiting thread is the only thread that waits on the handle, use CreateEvent() to create an auto-reset event. This will automatically reset the handle after your waiting thread is released, making it available again for the next time through.
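Put together, that looks roughly like this (again with stand-ins for the questioner's types):

#include <windows.h>

struct SomeObject { void Restart() { /* kicks off the asynchronous restart */ } };

// Auto-reset event (bManualReset = FALSE), initially nonsignaled.
HANDLE g_restartDone = CreateEvent(nullptr, FALSE, FALSE, nullptr);

void restart_and_wait(SomeObject* someObject) {
    someObject->Restart();
    // If SetEvent has already run, the event is signaled and this returns at once,
    // automatically resetting it for the next cycle.
    WaitForSingleObject(g_restartDone, INFINITE);
}

// Restart-completion handler, run on another thread:
void on_restart_complete() {
    SetEvent(g_restartDone);   // stays signaled until a wait consumes it; never PulseEvent
}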
Well, you could suspend (using SuspendThread) all other threads in the process, but I suppose you should rethink design of your program.
This is very easy to fix. Just make sure that the event is an auto-reset event (see the parameters of CreateEvent) and only ever call SetEvent on the event handle; never call ResetEvent, PulseEvent, or anything else. Then WaitForSingleObject will always return properly. If the event has already been set, WaitForSingleObject will return immediately and reset the event.
Although I worry about your design in general (i.e. you are making concurrent tasks sequential, thus losing all the benefit of the hard work to make them concurrent), I think I see the simple solution.
Change your event handle to be MANUAL RESET instead of AUTORESET. (see CreateEvent).
Then you won't miss the signal.
After WaitForSingleObject(...), call ResetEvent().
EDIT:
Forget what I just said. That won't work. See comments below.