Should std::async be called in a game loop in C++?

How efficient is the call to std::async? Can it be used to issue a task in a game loop?
I want all my input detection to be on a separate thread and synced at a certain point in the game loop in my main thread so that I can poll for input.
The only way I can think of doing this is to split up my tasks for input detection, launch them with std::async at the beginning of the actual game loop, and then call wait() later in the loop to sync the data. But I want that same behavior on EVERY iteration of the loop, so making this call every frame seems like it would be expensive...
Is that the way?

Assuming the implementation is well written, std::async(std::launch::async, ...) should be no more expensive than a small heap allocation plus constructing a std::thread. If creating a new std::thread to do the work is efficient enough for you, then std::async will be efficient enough, but it will save you the trouble of writing the synchronisation to get the result back to the main thread.
If creating a new std::thread for each piece of work is not appropriate, then std::async might not be either.
(N.B. remember that unless you specify std::launch::async as the launch policy there's no guarantee the task executes asynchronously, it might be deferred until you call get() on the returned future.)
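For illustration, here is a minimal sketch of that per-frame pattern, assuming a hypothetical pollInputDevices() function (not from the original question) that gathers the frame's input:

#include <future>
#include <vector>

struct InputEvent { int key; bool pressed; };

// Placeholder for the real per-frame input gathering work.
std::vector<InputEvent> pollInputDevices() { return {}; }

void gameLoop(bool& running)
{
    while (running) {
        // Launch the input task; std::launch::async guarantees it runs on its own thread.
        std::future<std::vector<InputEvent>> inputTask =
            std::async(std::launch::async, pollInputDevices);

        // ... simulate, animate, prepare the frame on the main thread ...

        // Sync point: block until the input results are ready, then use them.
        std::vector<InputEvent> events = inputTask.get();
        // ... apply events, render ...
    }
}

Note that this pays for a small allocation and (typically) a new thread every frame, which is exactly the cost the question is asking about; a persistent input thread avoids that.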

At least IMO, you should make up your mind between polling and asynchronous operation.
If you're going to poll, then using std::async is redundant. You're going to poll from the main thread anyway, so you might as well just have it directly poll for what it cares about and be done with it. Using std::async to launch something else will simply add a delay in getting the data to the main thread.
If you're going to use std::async, then you should take a rather different approach: the threads that get the input act independently. When they find some input, they send it to the main thread, and tell it that it has some input to process (e.g., by setting a semaphore). Then the main thread reacts to that semaphore, retrieves the input data (e.g., from a queue) and processes it.
In the latter case there is no polling at all: if the input thread hasn't told the main thread about some input data, then there simply isn't any to process, so there is nothing to poll for -- you already know there is none.
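As a rough sketch of that arrangement, using a condition variable for the "there is input to process" notification and made-up names throughout:

#include <condition_variable>
#include <mutex>
#include <queue>

struct InputEvent { int key; };

std::mutex              g_mutex;
std::condition_variable g_cv;
std::queue<InputEvent>  g_events;

void inputThread()
{
    for (;;) {
        InputEvent ev{};      // in real code: block on the OS/device for the next event
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            g_events.push(ev);
        }
        g_cv.notify_one();    // tell the main thread it has input to process
    }
}

void mainThreadProcessInput()
{
    std::unique_lock<std::mutex> lock(g_mutex);
    g_cv.wait(lock, [] { return !g_events.empty(); }); // no polling: sleep until told
    while (!g_events.empty()) {
        InputEvent ev = g_events.front();
        g_events.pop();
        // ... handle ev ...
    }
}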

Related

How do I signal a std::thread to exit gracefully?

Using C++17, for a worker thread with a non-blocking loop in it that performs some task, I see three ways to signal the thread to exit:
1. A std::atomic_bool that the thread checks in a loop. If it is set to true, the thread exits. The main thread sets it to true before invoking std::thread::join().
2. A std::condition_variable paired with a bool. This is similar to the above, except it allows you to invoke std::condition_variable::wait_for() to effectively "sleep" the thread (to lower CPU usage) while it waits for a potential exit signal; the exit is signalled by setting the bool, which is checked by the predicate passed as the third argument to wait_for(). The main thread would lock the mutex, set the bool to true, and invoke std::condition_variable::notify_all() before invoking std::thread::join() to signal the thread to exit.
3. A std::future and std::promise. The main thread holds a std::promise<void> while the worker thread holds the corresponding std::future<void>. The worker thread uses std::future::wait_for() similarly to the option above (a minimal sketch of this follows the list). The main thread invokes std::promise::set_value() before calling std::thread::join().
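Here is a minimal sketch of option 3, with illustrative names, just to make it concrete:

#include <chrono>
#include <future>
#include <thread>

void worker(std::future<void> exitSignal)
{
    using namespace std::chrono_literals;
    // wait_for() doubles as both the loop's sleep and the exit check.
    while (exitSignal.wait_for(100ms) == std::future_status::timeout) {
        // ... do one unit of work ...
    }
}

int main()
{
    std::promise<void> exitPromise;
    std::thread t(worker, exitPromise.get_future());

    // ... main thread does other things for a while ...

    exitPromise.set_value();   // signal the worker to exit
    t.join();
}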
My thoughts on each:
1. This is simple, but lacks the ability to "slow down" the worker thread loop without explicitly calling std::this_thread::sleep_for(). Seems like an "old fashioned" way of doing thread signals.
2. This one is comprehensive, but very complicated, because you need a condition variable plus a boolean variable.
3. This one seems like the best option, because it has the simplicity of #1 without the verbosity of #2. But I have no personal experience with std::future and std::promise yet, so I am not sure if it's the ideal solution. In my mind, promise & future are meant to transfer values across threads, not really be used as signals. So I'm not sure if there are efficiency concerns.
I see multiple ways of signaling a thread to exit. And sadly, my Google searching has only introduced more as I keep looking, without actually coming to a general consensus on the "modern" and/or "best" way of doing this with C++17.
I would love to see some light shed on this confusion. Is there a conclusive, definitive way of doing this? What is the general consensus? What are the pros/cons of each solution, if there is no "one size fits all"?
If you have a busy worker thread which requires a one-way notification that it should stop working, the best way is to just use an atomic<bool>. Whether the worker thread wants to slow itself down is up to the worker thread; the requirement to "throttle" it is completely orthogonal to thread cancellation and, in my opinion, should not be considered together with the cancellation itself. This approach has, to my knowledge, two drawbacks: you can't pass back the result (if any) and you can't pass back an exception (if any). But if you do not need either of those, then use atomic<bool> and don't bother with anything else. It is as modern as anything; there is nothing old-fashioned about it.
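A minimal sketch of that approach, with names of my own choosing:

#include <atomic>
#include <thread>

std::atomic<bool> stopRequested{false};

void worker()
{
    while (!stopRequested.load(std::memory_order_relaxed)) {
        // ... one iteration of busy work ...
        // Throttling (sleep, wait, etc.) is a separate concern and would go here.
    }
}

int main()
{
    std::thread t(worker);
    // ... later ...
    stopRequested.store(true);   // one-way notification: please stop
    t.join();
}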
A condition_variable is part of the producer/consumer pattern: there is something that produces work and something that consumes what was produced. To avoid busy waiting in the consumer while there is nothing to consume, a condition_variable is a great option; it is just the right primitive for such tasks. But it doesn't make sense for the thread cancellation process, and you will have to use another variable anyway, because you can't rely on the condition_variable alone: it might spuriously wake up the thread, you might "set" it before the thread gets into the waiting state and lose the "set" completely, and so on. It just can't be used alone, so we are back to square one, but now with an atomic<bool> variable to accompany our condition_variable.
The future/promise pair is good when you need to know the result of the operation done on the other thread. So it is not a replacement for the atomic<bool> approach; rather, it complements it. To remove the drawbacks described in the first paragraph, you add a future/promise pair to the equation. You provide the calling side with the future extracted from the promise, which lives within the thread. That promise gets set once the thread finishes:
Because an exception was thrown.
Because the thread has done its work and completed on its own.
Because we asked it to stop by setting the atomic<bool> variable.
So, as you can see, the future/promise pair just helps provide some feedback to the caller; it has nothing to do with the cancellation itself.
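A sketch of that combination, with illustrative names; the promise carries back the result or the exception, while the atomic flag handles cancellation:

#include <atomic>
#include <exception>
#include <future>
#include <thread>

std::atomic<bool> stopRequested{false};

void worker(std::promise<int> result)
{
    try {
        int processed = 0;
        while (!stopRequested.load()) {
            ++processed;                   // ... real work here ...
            if (processed == 1000) break;  // finished on its own
        }
        result.set_value(processed);       // normal completion (or completion after stop)
    } catch (...) {
        result.set_exception(std::current_exception()); // pass the error back
    }
}

int main()
{
    std::promise<int> p;
    std::future<int> f = p.get_future();
    std::thread t(worker, std::move(p));

    stopRequested.store(true);             // ask the thread to stop
    int processed = f.get();               // result, or rethrows the worker's exception
    t.join();
    (void)processed;
}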
P.S. You can always use an electric sledgehammer to crack a nut but it doesn't make the approach any more modern.
I can't say that this is conclusive or definitive, but since this is somewhat of an opinion question, I'll give an answer based on a lot of trial and error solving the kind of problem you are asking about (I think).
My preferred pattern is to signal the thread to stop with an atomic bool, and to control the loop timing with a condition variable.
We ran into the requirement for running repeating tasks on worker threads so often that we created a class that we called 'threaded_worker'. This class handles the complexities of aborting the thread, and timing the calls to the worker function.
The abort is handled via a method that sets the atomic bool 'abort' signal, which tells the thread to stop calling the work function and terminate.
The loop timing can be controlled by methods that set the wait time for the condition variable. The thread can be released to continue early via a method that calls notify on the condition variable.
We use the class as a base class for all kinds of objects that have some function that needs to execute on a separate thread. The class is designed to run the 'work' function once, or in a loop.
We use the bool for the abort, because it is simple and suitable to do the job. We use the condition variable for loop timing, because it has the benefit of being notified to 'short circuit' the timing. This is very useful when the threaded object is a consumer. When a producer has work for the threaded object, it can queue the work and notify that the work is available. The threaded object immediately continues, instead of waiting for the specified wait time on the condition variable.
The reason for both (the abort signal, and the condition variable) is that I see terminating the thread as one function, and timing the loop as another.
We used to time loops by putting the thread to sleep for some duration. This made it almost impossible to get predictable loop timing on Windows computers. Some computers will return from sleep(1) in about 1ms, but others will return in 15ms. Our performance was highly dependent on the specific hardware. Using condition variables we have greatly improved the timing of critical tasks. The added benefit of notifying a waiting thread when work is available is more than worth the complexity of the condition variable.
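A stripped-down sketch of such a class (the real threaded_worker has more to it, and the names here are only illustrative):

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

class threaded_worker {
public:
    threaded_worker(std::function<void()> work, std::chrono::milliseconds interval)
        : work_(std::move(work)), interval_(interval), thread_([this] { run(); }) {}

    ~threaded_worker() { abort(); }

    // Signal the thread to stop calling the work function and terminate.
    void abort() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            abort_.store(true);
        }
        cv_.notify_all();                       // short-circuit any wait in progress
        if (thread_.joinable()) thread_.join();
    }

    // e.g. a producer queued work for this object: wake it immediately.
    void notify() { cv_.notify_all(); }

private:
    void run() {
        std::unique_lock<std::mutex> lock(mutex_);
        while (!abort_.load()) {
            lock.unlock();
            work_();                            // one call of the work function
            lock.lock();
            // Wait up to interval_, but wake early on notify() or abort().
            cv_.wait_for(lock, interval_, [this] { return abort_.load(); });
        }
    }

    std::function<void()>     work_;
    std::chrono::milliseconds interval_;
    std::atomic<bool>         abort_{false};
    std::mutex                mutex_;
    std::condition_variable   cv_;
    std::thread               thread_;          // declared last: starts after the rest
};

A producer with new work for the object calls notify(), and the worker continues immediately instead of waiting out the remainder of its interval.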

Is It Possible to Send a Signal to a Specific pthread ID?

I'm working on a multi-thread scheduling assignment, which involves adding threads to a variety of queues and selecting the appropriate one to execute.
The pthread_cond_signal(&condition) command is completely asynchronous from what I can tell; it's simply thrown into memory and the first thread to find it with the appropriate pthread_cond_wait() will consume it.
However, say I have a vector of thread ids that have been pushed as the thread is created, ie:
threadIDVector1[0] = 3061099328
threadIDVector1[1] = 3077884736
...
threadIDVector2[0] = 3294747394
threadIDVector2[1] = 3384567393
...
etc.
And I wanted to send a signal specifically to the thread with an id that matches the appropriate element of a vector. I.e. the algorithm would be:
While (at least one threadVector is non-empty):
Look at the first element in each vector
Select the appropriate one to signal by some criteria
Send a signal to ONLY that thread
Complete the thread and remove from threadIDVectorX
Is there some way to execute the above, or some accepted standard for achieving the same result?
There is no way to "send" a signal to a specific thread, nor to know which thread among many will be woken by the OS. It is entirely non-deterministic.
You could use the "multiple condition variable" solution as proposed in the comments. But my preferred solution to something like this is a pipe or socket pair. Have the thread doing the waking write something (like a single byte) to the pipe for the corresponding thread to signal it.
This has a lot of benefits in my book. First, it allows bidirectional communication. Your pseudocode loop at the end of your question seems to also want to remove a finished thread from the list, so you need to know when that thread is done. You could have another CV, or you could have the completing thread write a single byte back to the manager object before exiting. Much easier, I feel.
It also allows you to choose between blocking or nonblocking I/O, or to use synchronous multiplexing with select(2) or epoll(2). If you were not exiting from the worker threads, but instead wanted to reuse them, the notifying thread would need to know when they're ready for more work. Again, a CV would be fine here, but the file-descriptor approach allows the notifier to wait for all of the worker threads in a single select(2) call.
The last thing is that I find files simpler. pthreads are pretty complicated, and multithreading is already hard enough to get right. I find that files are easier to manage and reason about in a multithreaded context, making it easier to avoid locking or crashes.
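A minimal POSIX sketch of the pipe-per-thread idea, with error handling omitted and only one worker shown:

#include <pthread.h>
#include <unistd.h>

// One worker's end of the plumbing: it sleeps on wake_fd and reports on done_fd.
struct worker_ctx {
    int wake_fd;   // read end of this worker's wake pipe
    int done_fd;   // write end of the completion pipe
};

void* worker(void* arg)
{
    worker_ctx* ctx = static_cast<worker_ctx*>(arg);
    char byte;
    read(ctx->wake_fd, &byte, 1);      // block until the manager signals *this* thread
    // ... do the scheduled work ...
    write(ctx->done_fd, "d", 1);       // tell the manager this thread is finished
    return nullptr;
}

int main()
{
    int wake_pipe[2], done_pipe[2];
    pipe(wake_pipe);                   // error handling omitted for brevity
    pipe(done_pipe);

    worker_ctx ctx{wake_pipe[0], done_pipe[1]};
    pthread_t tid;
    pthread_create(&tid, nullptr, worker, &ctx);

    write(wake_pipe[1], "w", 1);       // signal exactly this worker, no other
    char byte;
    read(done_pipe[0], &byte, 1);      // learn that it finished (could select() on many)
    pthread_join(tid, nullptr);
}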

How do I execute a C++ function asynchronously and not block/wait?

I want to execute a function asynchronously and not wait for it to complete. I initially thought I could use std::async with launch::async, but the returned future's destructor blocks until the function is complete.
Is there a way of running a function on a thread pool using stl without blocking?
You should spawn a single new thread which waits on a counting semaphore. When it is awoken (unblocked), it will send one RPC request and decrement the counter. When the user clicks the button, increment the counter. The same thread can service all requests throughout the program's lifetime.
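A sketch of that arrangement using a mutex, a condition variable, and a counter as the counting semaphore (sendRpcRequest() is a stand-in for the real call; with C++20 you could use std::counting_semaphore instead):

#include <condition_variable>
#include <mutex>

std::mutex              g_m;
std::condition_variable g_cv;
int                     g_pending = 0;   // the "counting semaphore"

void sendRpcRequest() { /* the real RPC goes here */ }

void rpcWorker()
{
    for (;;) {
        std::unique_lock<std::mutex> lock(g_m);
        g_cv.wait(lock, [] { return g_pending > 0; });
        --g_pending;                      // take one unit of work
        lock.unlock();
        sendRpcRequest();                 // do it without holding the lock
    }
}

// Called from the UI thread on each button click.
void onButtonClicked()
{
    {
        std::lock_guard<std::mutex> lock(g_m);
        ++g_pending;
    }
    g_cv.notify_one();
}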
You're looking for std::thread::detach. http://en.cppreference.com/w/cpp/thread/thread/detach
You can create a thread, and then detach from it. At that point you can delete your thread handle and the thread will run without you.
Incidentally it's usually considered bad form to use this technique. Generally you should care about the state of the thread, and should try to shut it down gracefully at program end, but in practice this is a useful trick for when you really don't care.
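A minimal example of the technique:

#include <chrono>
#include <thread>

void fireAndForget()
{
    std::thread t([] {
        // ... the asynchronous work goes here; placeholder delay for illustration ...
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    });
    t.detach();   // 't' no longer owns the thread; destroying 't' is now safe
}                 // the detached thread keeps running on its own

The detached thread must not use anything that may be destroyed before it finishes, and it may be cut short when the process exits.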
This proposal talks about executors... it looks like the kind of thing I was hoping already existed, but apparently it doesn't.
http://isocpp.org/files/papers/n4039.html

Check if a thread is finished in C++11?

I need a way to know whether the thread I am running has finished or not: if it has not finished, wait for it; if it has finished, print a success message.
I can't find a method or anything like that for this in the C++11 thread library.
I can't set a global variable, because inside the thread I am using execvp, which does not return on success.
Is there any way to do that? A method, a flag, or anything else.
Edit: To make it clear, I want to write a function that checks whether the thread is finished.
You can use C++11 futures and promises:
Futures are a high level mechanism for passing a value between threads, and allow a thread to wait for a result to be available without having to manage the locks directly.
The benefits of futures over using plain threads are:
Future returns a value.
You can wait on a future with a timeout.
Surely this can be done with plain threads, but why reinvent the wheel when C++11 already provides it?
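For example, a future lets you ask whether the work has finished without blocking (launchChild() here stands in for the poster's execvp-based work):

#include <chrono>
#include <future>
#include <iostream>

int launchChild() { /* fork/execvp/wait would go here */ return 0; }

int main()
{
    std::future<int> f = std::async(std::launch::async, launchChild);

    // Non-blocking "is it finished?" check:
    if (f.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        std::cout << "already finished\n";

    f.wait();                 // or block until it is finished
    std::cout << "successful, status = " << f.get() << '\n';
}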

pthreads: perform function on main thread

I'm looking for the analogous of Cocoa's
-[NSObject performSelectorOnMainThread: withObject: waitUntilDone:]
method.
So basically I have a function that does some work on a separate thread but it must perform some synchronous calls that need to be performed on the main one.
In Cocoa, the message is added to the run loop, which is cleared as part of its iteration.
To simulate this:
you'll want a run loop
an abstract message system
a reference counting mechanism (in most cases)
a way to add those messages to a run loop for scheduled execution
timers would be a nice addition
To accomplish something similar using pthread interfaces exclusively, start by reading up on conditions (pthread_cond_t).
I know of no pthread interface with a one-to-one relationship to what you're trying to accomplish. Conditions also operate without run loops, so you may need to bring that to the table if you do not reuse an existing run loop implementation. If you do use run loops, then you just need a lock to add messages to a thread with a run loop.
pthreads are a very low-level abstraction, so there's no easy way to do this with raw pthreads. Typically you'll want to write to a file descriptor to wake up an event loop on the main thread, then pass it a pointer to the function you want to run. You could even write pointer values onto a pipe(), then have the main thread execute them.
To wait synchronously, you can simply have a mutex and condition variable, plus completion flag on these execution request objects. Have the child thread wait on the mutex/condvar/completion flag, then in the main thread (under the mutex) set the flag and signal the cvar. Cleanup of the request structure would be done in the child.
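A sketch of that request object and the two sides of the handshake (the pipe/event-loop plumbing that delivers the pointer, and all the names here, are assumptions):

#include <pthread.h>

// A request that a worker thread hands to the main thread's event loop.
struct main_thread_request {
    void (*fn)(void*);             // function to run on the main thread
    void* arg;
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    bool done;
};

void init_request(main_thread_request* req, void (*fn)(void*), void* arg)
{
    req->fn = fn;
    req->arg = arg;
    pthread_mutex_init(&req->mutex, nullptr);
    pthread_cond_init(&req->cond, nullptr);
    req->done = false;
}

// Worker side: after submitting the request (e.g. writing its pointer to the
// main thread's wake pipe), block until the main thread marks it done.
void wait_for_completion(main_thread_request* req)
{
    pthread_mutex_lock(&req->mutex);
    while (!req->done)                         // guards against spurious wakeups
        pthread_cond_wait(&req->cond, &req->mutex);
    pthread_mutex_unlock(&req->mutex);
    // cleanup of the request structure happens here, in the child, as described above
    pthread_mutex_destroy(&req->mutex);
    pthread_cond_destroy(&req->cond);
}

// Main thread side: called from the event loop once it has read the pointer
// off the pipe.
void execute_request(main_thread_request* req)
{
    req->fn(req->arg);                         // run the work on the main thread
    pthread_mutex_lock(&req->mutex);
    req->done = true;
    pthread_cond_signal(&req->cond);
    pthread_mutex_unlock(&req->mutex);
}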
To be more specific, it'd help if you could mention what event loop you have running on your main thread.