C++/Windows multi-threaded synchronization / data sharing

My requirement is that a single frame of data is to be processed by two methods in parallel (they need to be parallel because they are computationally demanding).
Based on the result of either of the threads, the other needs to be stopped.
That is if method 1 returns TRUE first, method 2 should be stopped.
If method 1 returns FALSE first, method 2 should not be stopped.
Similarly, if method 2 returns TRUE first, method 1 should be stopped.
If method 2 returns FALSE first, method 1 should not be stopped.
Please note that method 1 and method 2 are library calls (black box) and I don't have access to their internals. All I know is that they are computationally intense.
How can I implement it in C++/Windows? Any suggestions?

Take a look at the concurrency runtime.
Specifically the task namespace (http://msdn.microsoft.com/en-us/library/dd492427.aspx) and the when_any function (http://msdn.microsoft.com/en-us/library/hh749973.aspx).
concurrency::when_any will create a task that completes when any of the input tasks complete.

No matter if you use plain Windows threads, std::thread, Task Parallelism, or whatever library you prefer, you're still not going to achieve what you want given the details you provided in your question.
While you can certainly figure out when the first thread/task is finished (e.g. #j-w's answer), you cannot really stop the other task gracefully without telling your "blackbox library function" to stop (unless it provides a way for explicit early cancellation). You didn't indicate that the blackbox function can be told to cancel midway, so I'm assuming it cannot.
You cannot simply kill the thread/task, since that would create resource leaks and maybe even other nasty problems such as deadlocks, etc., depending on what your blackbox function does.
So, you could go with something like when_any, or other synchronization/signaling primitives, and just let the other thread/task continue to run even though you don't need the result, "un-blackbox" your library functions and add cancellation support, or forget about it altogether.

Related

do I have to call get or wait on a std::async future

If I run it with launch::async then I know it will run anyway (I think that's what I read), but do I have to call get / wait in order to perform some sort of clean-up?
I don't need the result, I just want a nice fire-and-forget.
You should call get or wait if you want to ensure that the task finishes. If you don't call get or wait, the system will terminate the task when the parent thread terminates.
This can lead to undefined behaviour if you're dealing with resource management in the child thread (i.e. pointers or something on the heap). Even if this isn't explicit in the parent thread, it could creep up somewhere else in your program.
In addition, it would be confusing for other programmers who might be unsure whether you intentionally excluded the call to get/wait or forgot it accidentally.
If you want to make sure no one will ever think that you mistakenly forgot to use get, I'd recommend using std::thread instead, and then calling .detach on it. This way no one will be able to call .join on it, because it won't be joinable anymore.
For more details, see: Can I use std::async without waiting for the future limitation?
Note that the destructor of the returned future object will perform a blocking wait until your task finishes (and the corresponding shared state becomes ready).
See last paragraph on the page: https://en.cppreference.com/w/cpp/thread/future/~future
"I just want a nice fire and forget." -- then std::async is not the right tool. Just launch a thread and detach it:
std::thread thr(my_function);
thr.detach();
std::async is basically about computing a result, possibly in a separate thread, and eventually being able to access that result. It returns an object of type std::future<whatever> that gives you that result. If you don't care about the result, you don't need the bookkeeping overhead. Further, it's possible that std::async won't do anything at all until you try to get the result. So if you don't try to get the result, you don't get "fire and forget" in any meaningful sense.

I have a loop that calls ID3D11DeviceContext::CopySubresourceRegion. How can I force a wait on that?

As the title says, I'm doing a CopySubresourceRegion in a loop, and at some point in there I need to force a wait until it completes. From MSDN's documentation, it looks like I can call ID3D11DeviceContext::Flush, then ID3D11DeviceContext::GetData on an event query created by ID3D11Device::CreateQuery with D3D11_QUERY_EVENT.
I've tried that, and it SEEMS to be working on my tests so far, but there are things I'm uncertain about.
Would it work correctly if I called CreateQuery just once before the loop begins and use that query repeatedly with each GetData call?
Should I destroy the query after creating it to prevent leaking queries? There doesn't seem to be a DestroyQuery method, so maybe call free on my ID3D11Query*?
If I make a call to either ID3D11DeviceContext::Map or Unmap before I need to wait on the copy to finish, do I still need Flush?
Why do you need an explicit wait for completion? D3D11 already shields resources internally for lifetime and in-flight usage. If you call Map, the system makes sure to wait for completion for you.
Usually it is the opposite behaviour we desire: to be able to query for completion in a non-blocking way, using queries, so as to know when it is safe to call Map.
For 2: queries are like any other resources in D3D11; you destroy them by calling Release, and you can reuse them. Create a pool of queries, mark them as used when in use, then mark them available again once you were able to collect the data with GetData.

Executing function for some amount of time

I am sorry if this was asked before, but I didn't find anything related to this. This is for my understanding; it's not homework.
I want to execute a function only for some amount of time. How do I do that? For example,
int main()
{
    ....
    ....
    func();
    .....
    .....
}
void func()
{
    ......
    ......
}
Here, my main function calls another function. I want that function to execute only for a minute. In that function, I will be getting some data from the user. So, if the user doesn't enter the data, I don't want to be stuck in that function forever. Irrespective of whether the function has completed by that time or not, I want to come back to the main function and execute the next operation.
Is there any way to do it ? I am on windows 7 and I am using VS-2013.
Under windows, the options are limited.
The simplest option would be for func() to explicitly and periodically check how long it has been executing (e.g. store its start time, periodically check the amount of time elapsed since that start time) and return if it has gone on longer than you wish.
It is possible (C++11 or later) to execute the function within another thread, and for main() to signal that thread when the required time period has elapsed. That is best done cooperatively. For example, main() sets a flag, the thread function checks that flag and exits when required to. Such a flag is usually best protected by a critical section or mutex.
An extremely unsafe way under Windows is for main() to forcibly terminate the thread. That is unsafe, as it can leave the program (and, in the worst cases, the operating system itself) in an unreliable state (e.g. if the terminated thread is in the process of allocating memory, executing certain kernel functions, or manipulating global state of a shared DLL).
If you want better/safer options, you will need a real-time operating system with strict memory and timing partitioning. To date, I have yet to encounter any substantiated documentation about any variant of Windows and unix (not even real time variants) with those characteristics. There are a couple of unix-like systems (e.g. LynxOS) with variants that have such properties.
I think a part of your requirement can be met using multithreading and a loop with a stopwatch.
Create a new thread.
Start a stopwatch.
Start a loop with one minute as the condition for the loop.
During each iteration check if the user has entered the input and process.
When one minute is over, the loop quits.
I'm not sure about the feasibility of this idea; I'm just sharing it. I don't know much about C++, but in Node.js your requirement can be achieved using 'events'. Maybe such things exist in C++ too.

Spawn a new thread as soon as another has finished

I have an expensive function that needs to be executed 1000 times. Execution can take between 5 seconds and 10 minutes; it thus has high variance.
I'd like to have multiple threads working on it. My current implementation divides these 1000 calls into 4 batches of 250 calls and spawns 4 threads. However, if one thread has a "bad day", it takes much longer to finish compared to the other 3 threads.
Hence I'd like to issue a new call to the function whenever a thread has finished a previous call, until all 1000 calls have been made.
I think a thread pool would work, but if at all possible I'd like a simple method (= as little additional code as possible). A task-based design also goes in this direction (I think). Is there an easy solution for this?
Initialize a semaphore with 1000 units. Have each of the 4 threads loop around a semaphore wait() and the work function.
All the threads will then work on the function until it has been executed 1000 times. Even if three of the threads get stuck and take ages, the fourth will handle the other 997 calls.
[Edit]
[Edit] Meh.. apparently, the standard C++11 library does not include semaphores. A semaphore is, however, a basic OS synchronization primitive, and so should be easy enough to call, e.g. via POSIX.
You can use one of the reference implementations of Executors and then call the function via:
#include <experimental/thread_pool>
using std::experimental::post;
using std::experimental::thread_pool;
thread_pool pool_{1};
void do_big_task(int n)
{
    for (int i = 0; i < n; ++i)
    {
        post(pool_, [=]
        {
            // do your work here;
        });
    }
}
Executors are coming in C++17 so I thought I would get in early.
Or if you want to try another flavour of executors then there is a more recent implementation with a slightly different syntax.
Given that you have already been able to segment the calls into separate entities for the threads to handle, one approach is to use std::packaged_task (with its associated std::future) to handle the function call, and place the tasks in a queue of some sort. In turn, each thread can pick up the packaged tasks and process them.
You will need to lock the queue for concurrent access; there may be some bottlenecking here, but compared to the concern that a thread can have "a bad day", this should be minimal. This is effectively a thread pool, but it allows you some control over the execution of the tasks.
Another alternative is to use std::async and specify its launch policy as std::launch::async. The disadvantage is that you do not control the thread creation itself, so you are dependent on how efficiently your standard library manages the threads vs. how many cores you have.
Either approach would work; the key is to measure the performance of each over a reasonable sample size. The measurement should cover both time and resource use (threads, and keeping the cores busy). Most OSes include ways of measuring the resource usage of a process.

Why should I use std::async?

I'm trying to explore all the options of the new C++11 standard in depth. While using std::async and reading its definition, I noticed 2 things, at least under Linux with GCC 4.8.1:
it's called async, but it has a really "sequential behaviour": basically, at the line where you call get on the future associated with your async function foo, the program blocks until the execution of foo is completed.
it depends on the exact same external library as other, better, non-blocking solutions, which means pthread; if you want to use std::async you need pthread.
At this point it's natural for me to ask: why choose std::async over even a simple set of functors? It's a solution that doesn't even scale: the more futures you call get on, the less responsive your program will be.
Am I missing something? Can you show an example that is guaranteed to be executed in an async, non-blocking way?
it's called async, but it has a really "sequential behaviour"
No, if you use the std::launch::async policy then it runs asynchronously in a new thread. If you don't specify a policy it might run in a new thread.
basically, at the line where you call get on the future associated with your async function foo, the program blocks until the execution of foo is completed
It only blocks if foo hasn't completed, but if it was run asynchronously (e.g. because you used the std::launch::async policy) it might have completed before you need it.
it depends on the exact same external library as other, better, non-blocking solutions, which means pthread
Wrong, it doesn't have to be implemented using Pthreads (and on Windows it isn't, it uses the ConcRT features.)
At this point it's natural for me to ask: why choose std::async over even a simple set of functors?
Because it guarantees thread-safety and propagates exceptions across threads. Can you do that with a simple set of functors?
It's a solution that doesn't even scale: the more futures you call get on, the less responsive your program will be.
Not necessarily. If you don't specify the launch policy then a smart implementation can decide whether to start a new thread, or return a deferred function, or return something that decides later, when more resources may be available.
Now, it's true that with GCC's implementation, if you don't provide a launch policy then with current releases it will never run in a new thread (there's a bugzilla report for that) but that's a property of that implementation, not of std::async in general. You should not confuse the specification in the standard with a particular implementation. Reading the implementation of one standard library is a poor way to learn about C++11.
Can you show an example that is guaranteed to be executed in an async, non-blocking way?
This shouldn't block:
auto fut = std::async(std::launch::async, doSomethingThatTakesTenSeconds);
auto result1 = doSomethingThatTakesTwentySeconds();
auto result2 = fut.get();
By specifying the launch policy you force asynchronous execution, and if you do other work while it's executing then the result will be ready when you need it.
If you need the result of an asynchronous operation, then you have to block, no matter what library you use. The idea is that you get to choose when to block, and, hopefully when you do that, you block for a negligible time because all the work has already been done.
Note also that std::async can be launched with policies std::launch::async or std::launch::deferred. If you don't specify it, the implementation is allowed to choose, and it could well choose to use deferred evaluation, which would result in all the work being done when you attempt to get the result from the future, resulting in a longer block. So if you want to make sure that the work is done asynchronously, use std::launch::async.
I think your problem is with std::future saying that it blocks on get. It only blocks if the result isn't already ready.
If you can arrange for the result to be already ready, this isn't a problem.
There are many ways to know that the result is already ready. You can poll the future and ask it (relatively simple), you could use locks or atomic data to relay the fact that it is ready, you could build up a framework to deliver "finished" future items into a queue that consumers can interact with, you could use signals of some kind (which is just blocking on multiple things at once, or polling).
Or, you could finish all the work you can do locally, and then block on the remote work.
As an example, imagine a parallel recursive merge sort. It splits the array into two chunks, then does an async sort on one chunk while sorting the other chunk. Once it is done sorting its half, the originating thread cannot progress until the second task is finished. So it does a .get() and blocks. Once both halves have been sorted, it can then do a merge (in theory, the merge can be done at least partially in parallel as well).
This task behaves like a linear task to those interacting with it on the outside -- when it is done, the array is sorted.
We can then wrap this in a std::async task, and have a future sorted array. If we want, we could add in a signalling procedure to let us know that the future is finished, but that only makes sense if we have a thread waiting on the signals.
In the reference: http://en.cppreference.com/w/cpp/thread/async
If the async flag is set (i.e. policy & std::launch::async != 0), then
async executes the function f on a separate thread of execution as if
spawned by std::thread(f, args...), except that if the function f
returns a value or throws an exception, it is stored in the shared
state accessible through the std::future that async returns to the
caller.
It is a nice property to keep a record of exceptions thrown.
http://www.cplusplus.com/reference/future/async/
there are three types of policy:
launch::async
launch::deferred
launch::async|launch::deferred
By default, launch::async|launch::deferred is passed to std::async.