I have two algorithms to solve a task X.
How can I start one thread for algorithm 1 and another for algorithm 2, wait for whichever finishes first, then kill the other one and proceed?
I have seen that join from std::thread will make me wait for a thread to finish, but I can't join both threads, otherwise I will wait for both to complete. I want to launch both of them and wait until one of them completes. What's the best way to achieve this?
You can't kill threads in C++11 so you need to orchestrate their demise.
This could be done by having them loop on an std::atomic<bool> variable and getting the winner to std::call_once() in order to set the return value and flag the other threads to end.
Perhaps a bit like this:
#include <atomic>
#include <chrono>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <mutex>
#include <thread>

std::once_flag once; // for std::call_once()

void algorithm1(std::atomic<bool>& done, int& result)
{
    // Do some randomly timed work
    for(int i = 0; !done && i < 3; ++i) // end early if done becomes true
        std::this_thread::sleep_for(std::chrono::seconds(std::rand() % 3));

    // Only one thread gets to leave a result
    std::call_once(once, [&]
    {
        done = true; // stop the other thread
        result = 1;
    });
}

void algorithm2(std::atomic<bool>& done, int& result)
{
    // Do some randomly timed work
    for(int i = 0; !done && i < 3; ++i) // end early if done becomes true
        std::this_thread::sleep_for(std::chrono::seconds(std::rand() % 3));

    // Only one thread gets to leave a result
    std::call_once(once, [&]
    {
        done = true; // stop the other thread
        result = 2;
    });
}

int main()
{
    std::srand(std::time(0));

    std::atomic<bool> done(false);
    int result = 0;

    std::thread t1(algorithm1, std::ref(done), std::ref(result));
    std::thread t2(algorithm2, std::ref(done), std::ref(result));

    t1.join(); // returns promptly even if t2 wins, because done gets set
    t2.join();

    std::cout << "result : " << result << '\n';
}
Firstly, don't kill the losing algorithm. Just let it run to completion and ignore the result.
Now, the closest thing to what you asked for is to have a mutex+condvar+result variable (or more likely two results, one for each algorithm).
Code would look something like
X result1, result2;
bool complete1 = false;
bool complete2 = false;
std::mutex result_mutex;
std::condition_variable result_cv;

// simple wrappers that signal when algoN has finished
std::thread t1([&]() {
    result1 = algo1();
    std::unique_lock<std::mutex> lock(result_mutex);
    complete1 = true;
    result_cv.notify_one();
});
std::thread t2([&]() {
    result2 = algo2();
    std::unique_lock<std::mutex> lock(result_mutex);
    complete2 = true;
    result_cv.notify_one();
});
t1.detach();
t2.detach();

// wait until one of the algos has completed
int winner;
{
    std::unique_lock<std::mutex> lock(result_mutex);
    result_cv.wait(lock, [&]() { return complete1 || complete2; });
    if (complete1) winner = 1;
    else           winner = 2;
}
The other mechanisms, including the future/promise one, require the main thread to busy-wait, since C++11 has no way to block on whichever of two futures finishes first. The only non-busy-waiting alternative is to move the post-success processing into a call_once: in that case the main thread simply joins both children, and the losing child just returns when it finishes and realises it lost.
The C++11 standard offers some facilities to solve these problems, e.g. futures and promises.
Please have a look at http://de.cppreference.com/w/cpp/thread/future and When is it a good idea to use std::promise over the other std::thread mechanisms?.
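For the original question (wait for whichever of two algorithms finishes first), a minimal future-based sketch is shown below. The algorithm1/algorithm2 bodies are placeholders, and since C++11 has no "wait for any of several futures" facility, the main thread polls with wait_for:

#include <chrono>
#include <future>
#include <iostream>

// Placeholder algorithms; each returns its own id as the result.
int algorithm1() { return 1; }
int algorithm2() { return 2; }

int main()
{
    auto f1 = std::async(std::launch::async, algorithm1);
    auto f2 = std::async(std::launch::async, algorithm2);

    int result = 0;
    for (;;) { // poll both futures until one of them is ready
        if (f1.wait_for(std::chrono::milliseconds(10)) == std::future_status::ready) {
            result = f1.get();
            break;
        }
        if (f2.wait_for(std::chrono::milliseconds(10)) == std::future_status::ready) {
            result = f2.get();
            break;
        }
    }
    std::cout << "first result: " << result << '\n';
    // Note: the losing task still runs to completion; the destructor of a future
    // returned by std::async blocks until its task has finished.
}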
Related
What I'm Trying To Do
Hi, I have two types of threads: the main one and the workers, where the number of workers equals the number of cores on the CPU. What I'm trying to do is this: when the main thread needs to run an update, it sets a boolean called Updating to true and calls cv.notify_all on a condition_variable. Each worker then does its work and, when done, increments an atomic_int called CoresCompleted by one, followed by cv.notify_all so that the main thread can check whether all the work is done. Each worker then waits for Updating to become false, so it is sure that all the other workers have finished and it doesn't run the update again. Once everything is done, the main thread sets Updating to false and notifies all.
CODE
Main
void UpdateManager::Update() {
    //Prepare Update
    CoresCompleted = 0;
    Updating = true;
    //Notify Update Started
    cv.notify_all();
    //Wait for Update to end
    auto Pre = high_resolution_clock::now();
    cv.wait(lk, [] { return (int)UpdateManager::CoresCompleted >= (int)UpdateManager::ProcessorCount; });
    auto Now = high_resolution_clock::now();
    auto UpdateTime = duration_cast<nanoseconds>(Now - Pre);
    //End Update and notify threads
    Updating = false;
    cv.notify_all();
}
Workers
void CoreGroup::Work() {
    Working = true;
    unique_lock<mutex> lk(UpdateManager::m);
    while (Working) {
        //Wait For Update To Start
        UpdateManager::cv.wait(lk, []{ return UpdateManager::Updating; });
        if (!Working)
            return;
        //Do Work
        size_t Size = Groups.size();
        auto Pre = high_resolution_clock::now();
        for (size_t Index = 0; Index < Size; Index++)
            Groups[Index]->Update();
        auto Now = high_resolution_clock::now();
        UpdateTime = duration_cast<nanoseconds>(Now - Pre);
        //Increment CoresCompleted And Notify All
        UpdateManager::CoresCompleted++;
        UpdateManager::cv.notify_all();
        //Wait For Update To End
        UpdateManager::cv.wait(lk, []{ return !UpdateManager::Updating; });
    }
}
Problem
Once the workers reach the last wait, where they wait for Updating to become false, they get stuck and never leave. For some reason the last notify_all in the main thread never reaches the workers. I have searched and looked at many examples, but I can't figure out why it isn't triggering. Maybe I misunderstood how the cv and the lock work. Any ideas why this is happening and how to fix it?
Here is how your code works:
At some point the wait in Update ends because it has been notified:
cv.wait(lk, [] { return (int)UpdateManager::CoresCompleted >= (int)UpdateManager::ProcessorCount; });
It leaves the wait and reacquires the lock on the mutex. It proceeds to do its work, then reaches the end and notifies the other threads that they can continue working with this line:
cv.notify_all();
But that is a lie: they can't continue working, because you still hold the lock. Release it and they will proceed:
void UpdateManager::Update() {
    <...>
    //End Update and notify threads
    Updating = false;
    lk.unlock();
    cv.notify_all();
}
That probably isn't the only issue in this code but I assume that you lock the mutex before entering the Update method or have some guarantee that it runs before the other one (Work).
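For illustration, here is a minimal sketch (using the statics from the question: m, cv, Updating, CoresCompleted, ProcessorCount) of an Update() that takes its own lock and releases it before the final notify. It only addresses the locking issue discussed here, not any other problems the code may have:

void UpdateManager::Update() {
    std::unique_lock<std::mutex> lk(UpdateManager::m);
    //Prepare Update
    CoresCompleted = 0;
    Updating = true;
    //Notify Update Started (workers actually proceed once wait() below releases the lock)
    cv.notify_all();
    //Wait for Update to end
    cv.wait(lk, [] { return (int)UpdateManager::CoresCompleted >= (int)UpdateManager::ProcessorCount; });
    //End Update: unlock before notifying so the workers can make progress
    Updating = false;
    lk.unlock();
    cv.notify_all();
}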
I want to check in one thread A whether a condition is met.
If the condition is true, I want another thread B to execute my code; once that is done, thread B should wait until the condition is true again, then execute the code again, and so on. There is enough time to execute all the code in thread B before the condition becomes false. Basically, thread A runs at normal speed and thread B only runs when thread A tells it that it can. And I don't want to spawn a new thread B every time; it shouldn't stop, it should just execute its code and then wait until it's allowed to execute its code again.
How can I do that? Below is what I have so far, but I don't know how to run mainExection() in this kind of loop.
std::mutex m;
std::condition_variable cv_can_execute;
bool b_can_execute = false;

void mainExection() {
    std::unique_lock lk(m);
    cv_can_execute.wait(lk, [] { return b_can_execute; });
    doSomethingElse();
}

void canExecute() {
    std::unique_lock lk(m);
    while (true) {
        condition = canRun();
        if (condition) {
            b_can_execute = true;
            cv_can_execute.notify_all();
        }
        else {
            b_can_execute = false;
        }
    }
    b_add_done = true;
    cv_add_done.notify_all();
}

int main() {
    std::thread canExec(canExecute);
    std::thread mainExec(mainExection);
    canExec.join();
    mainExec.join();
}
In your code both threads immediately lock mutex m, so only one can run at a time.
That's why you don't see the behavior you expect.
You should only lock the mutex when you want to touch shared memory, in your case b_can_execute. The code should look something like this:
void mainExection() {
    {
        std::unique_lock lk(m);
        cv_can_execute.wait(lk, [] { return b_can_execute; });
    } // Here the lock is released so A can do work.
    doSomethingElse();
}

void canExecute() {
    // std::unique_lock lk(m); Remove this
    while (true) {
        condition = canRun();
        if (condition) {
            {
                std::unique_lock lk(m); // Lock to change the shared variable.
                b_can_execute = true;
            } // Unlock here, so B can run.
            // It's best to unlock before you notify, so that B doesn't wake just to block again.
            cv_can_execute.notify_all();
        }
        else {
            std::unique_lock lk(m);
            b_can_execute = false;
        }
    }
    {
        std::unique_lock lk(m);
        b_add_done = true;
    }
    cv_add_done.notify_all();
}
Now, in your case you only lock the mutex to synchronize on a bool. This is usually seen as overkill, because the cost of locking and unlocking is relatively high. You could look at atomic variables, which would replace your bool and allow the threads to synchronize without a mutex.
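As an illustration of that last suggestion, here is a minimal sketch (the names threadA/threadB/can_execute/stop are placeholders, not from the original code) where thread B polls an std::atomic<bool> instead of blocking on the condition variable, at the cost of periodic wake-ups:

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> can_execute{false};
std::atomic<bool> stop{false};

void threadB() {
    while (!stop) {
        // Poll the flag; sleep briefly between checks to avoid burning a core.
        if (!can_execute) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            continue;
        }
        // doSomethingElse();
        can_execute = false; // done; wait until thread A raises the flag again
    }
}

void threadA() {
    for (int i = 0; i < 100; ++i) {
        // ... detect the condition ...
        can_execute = true; // let thread B run one pass
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    stop = true;
}

int main() {
    std::thread b(threadB);
    std::thread a(threadA);
    a.join();
    b.join();
}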
I need to launch a worker thread, perform some initialization, return a data structure as the initialization result, and continue the thread's execution. What is the best (or even possible) way to achieve this using modern C++ features only? Note that the launched thread should continue its execution (the thread does not terminate as usual). Unfortunately, most solutions assume the worker thread terminates.
Pseudo code:
// Executes in WorkerThread context
void SomeClass::Worker_treadfun_with_init()
{
    // 1. Initialization calls...
    // 2. Pass/signal initialization results to caller
    // 3. Continue execution of WorkerThread
}

// Executes in CallerThread context
void SomeClass::Caller()
{
    // 1. Create WorkerThread with "SomeClass::Worker_treadfun_with_init()" thread function
    // 2. Sleep thread until initialization results are available
    // 3. Grab results
    // 4. Continue execution of CallerThread
}
I think std::future meets your requirements.
// Executes in WorkerThread context
void SomeClass::Worker_treadfun_with_init(std::promise<Result>& pro)
{
    // 1. Initialization calls...
    // 2. Pass/signal initialization results to caller
    pro.set_value(yourInitResult);
    // 3. Continue execution of WorkerThread
}

// Executes in CallerThread context
void SomeClass::Caller()
{
    // 1. Create WorkerThread with "SomeClass::Worker_treadfun_with_init()" thread function
    std::promise<Result> pro;
    auto f = pro.get_future();
    std::thread([this, &pro]() { Worker_treadfun_with_init(pro); }).detach();
    // 2. Block until the initialization result is available
    auto result = f.get();
    // 3. Grab results
    // 4. Continue execution of CallerThread
}
Try using a pointer or reference to the data structure with the answer in it, and std::condition_variable to let you know when the answer has been computed:
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <vector>

std::vector<double> g_my_answer;
std::mutex g_mtx;
std::condition_variable g_cv;
bool g_ready = false;

void Worker_treadfun_with_init()
{
    //Do your initialization here
    {
        std::unique_lock<std::mutex> lck( g_mtx );
        for( double val = 0; val < 10; val += 0.3 )
            g_my_answer.push_back( val );
        g_ready = true;
        lck.unlock();
        g_cv.notify_one();
    }

    //Keep doing your other work..., here we'll just sleep
    for( int i = 0; i < 100; ++i )
    {
        std::this_thread::sleep_for( std::chrono::seconds(1) );
    }
}

void Caller()
{
    std::unique_lock<std::mutex> lck(g_mtx);
    std::thread worker_thread = std::thread( Worker_treadfun_with_init );
    //Calling wait will cause the current thread to sleep until g_cv.notify_one() is called.
    //Note: g_ready has static storage duration, so it is used directly instead of being captured.
    g_cv.wait( lck, [](){ return g_ready; } );

    //Print out the answer as the worker thread continues doing its work
    for( auto val : g_my_answer )
        std::cout << val << std::endl;

    //Unlock mutex (or better yet have unique_lock go out of scope)
    //in case the worker thread needs to lock again to finish
    lck.unlock();

    //...

    //Make sure to join the worker thread some time later on.
    worker_thread.join();
}
Of course in actual code you wouldn't use global variables; instead you would pass them by pointer or reference (or as member variables of SomeClass) to the worker function, but you get the point.
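A minimal sketch of that variant (the names worker/caller/answer/ready are illustrative, not from the original code), with the shared state passed to the worker by reference:

#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

void worker(std::vector<double>& answer, std::mutex& mtx,
            std::condition_variable& cv, bool& ready)
{
    {
        std::lock_guard<std::mutex> lck(mtx);
        for (double val = 0; val < 10; val += 0.3)
            answer.push_back(val);
        ready = true;
    } // unlock before notifying so the caller doesn't wake just to block again
    cv.notify_one();
    // ... continue with the rest of the thread's work ...
}

void caller()
{
    std::vector<double> answer;
    std::mutex mtx;
    std::condition_variable cv;
    bool ready = false;

    std::thread t(worker, std::ref(answer), std::ref(mtx), std::ref(cv), std::ref(ready));
    {
        std::unique_lock<std::mutex> lck(mtx);
        cv.wait(lck, [&] { return ready; });
    }
    // answer is now safely initialized while the worker keeps running

    t.join(); // joins once the worker's remaining work is done
}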
I have a program that starts N threads (async/future). I want the main thread to set up some data, then all the threads should run while the main thread waits for all of them to finish, and then this needs to loop.
What I have at the moment is something like this:
int main()
{
    //Start N new threads (std::future/std::async)
    while(condition)
    {
        //Set Up Data Here
        //Send Data to threads
        {
            std::lock_guard<std::mutex> lock(mrun);
            bRun = true;
        }
        run.notify_all();
        //Wait for threads
        {
            std::unique_lock<std::mutex> lock(mrun);
            run.wait(lock, [] { return bDone; });
        }
        //Reset bools
        bRun = false;
        bDone = false;
    }
    //Get results from futures once complete
}

int thread()
{
    while(otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        run.wait(lock, [] { return bRun; });
        bDone = true;
        //Do thread stuff here
        lock.unlock();
        run.notify_all();
    }
}
But I can't see any signs of either the main or the other threads waiting for each other! Any idea what I am doing wrong or how I can do this?
There are a couple of problems. First, you're setting bDone as soon as the first worker wakes up. Thus the main thread wakes immediately and begins readying the next data set. You want to have the main thread wait until all workers have finished processing their data. Second, when a worker finishes processing, it loops around and immediately checks bRun. But it can't tell if bRun == true means that the next data set is ready or if the last data set is ready. You want to wait for the next data set.
Something like this should work:
std::mutex mrun;
std::condition_variable dataReady;
std::condition_variable workComplete;
int nCurrentIteration = 0;
int nWorkerCount = 0;

int main()
{
    //Start N new threads (std::future/std::async)
    while(condition)
    {
        //Set Up Data Here
        //Send Data to threads
        {
            std::lock_guard<std::mutex> lock(mrun);
            nWorkerCount = N;
            ++nCurrentIteration;
        }
        dataReady.notify_all();
        //Wait for threads
        {
            std::unique_lock<std::mutex> lock(mrun);
            workComplete.wait(lock, [] { return nWorkerCount == 0; });
        }
    }
    //Get results from futures once complete
}

int thread()
{
    int nNextIteration = 1;
    while(otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        dataReady.wait(lock, [&nNextIteration] { return nCurrentIteration == nNextIteration; });
        lock.unlock();
        ++nNextIteration;
        //Do thread stuff here
        lock.lock();
        if (--nWorkerCount == 0)
        {
            lock.unlock();
            workComplete.notify_one();
        }
    }
}
Be aware that this solution isn't quite complete. If a worker encounters an exception, then the main thread will hang (because the dead worker will never reduce nWorkerCount). You'll likely need a strategy to deal with that scenario.
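One possible strategy, sketched below (not part of the original answer): wrap the per-iteration work in a try/catch so a failing worker still decrements nWorkerCount and the main thread is never left waiting for it. The worker_thread name is illustrative; the shared variables are the ones from the code above.

void worker_thread()
{
    int nNextIteration = 1;
    while (otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        dataReady.wait(lock, [&nNextIteration] { return nCurrentIteration == nNextIteration; });
        lock.unlock();
        ++nNextIteration;

        try {
            //Do thread stuff here
        } catch (...) {
            // Record or report the failure as appropriate, but fall through so the
            // counter below is still decremented and main() is not left hanging.
        }

        lock.lock();
        if (--nWorkerCount == 0)
        {
            lock.unlock();
            workComplete.notify_one();
        }
    }
}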
Incidentally, this pattern is called a barrier.
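For reference, if a newer standard is available, C++20 provides this pattern directly as std::barrier. A minimal sketch (illustrative names, not the original code) of the same set-up/process/collect loop:

#include <barrier>
#include <thread>
#include <vector>

int main()
{
    const int N = 4;                // number of workers
    const int iterations = 3;
    std::barrier sync_point(N + 1); // N workers + the main thread

    std::vector<std::thread> workers;
    for (int i = 0; i < N; ++i)
        workers.emplace_back([&] {
            for (int iter = 0; iter < iterations; ++iter) {
                sync_point.arrive_and_wait(); // wait until main has published the data
                // ... process this iteration's data ...
                sync_point.arrive_and_wait(); // signal that this iteration is done
            }
        });

    for (int iter = 0; iter < iterations; ++iter) {
        // ... set up data here ...
        sync_point.arrive_and_wait(); // release the workers
        sync_point.arrive_and_wait(); // wait for every worker to finish
        // ... consume the results ...
    }

    for (auto& t : workers)
        t.join();
}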
I'm trying to do this with the C++11 concurrency support.
I have a sort of thread pool of worker threads that all do the same thing, where a master thread has an array of condition variables (one for each thread; they need to 'start' synchronized, i.e. not run one cycle of their loop ahead).
for (auto &worker_cond : cond_arr) {
    worker_cond.notify_one();
}
Then the master thread has to wait for a notification from each thread of the pool before restarting its cycle. What's the correct way of doing this? Have a single condition variable and wait on some integer that each non-master thread increments? Something like this (still in the master thread):
std::unique_lock<std::mutex> lock(workers_mtx);
workers_finished.wait(lock, [&workers] { return workers == cond_arr.size(); });
I see two options here:
Option 1: join()
Basically instead of using a condition variable to start the calculations in your threads, you spawn a new thread for every iteration and use join() to wait for it to be finished. Then you spawn new threads for the next iteration and so on.
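A minimal sketch of Option 1 (the names are illustrative, not from the question): all synchronization comes from join(), at the cost of creating new threads every cycle.

#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical per-cycle work; in the real code this would be one cycle of a worker's loop.
void do_work(std::size_t id) { (void)id; /* ... calculations ... */ }

void run_iterations(std::size_t n_workers, std::size_t n_iterations)
{
    for (std::size_t iter = 0; iter < n_iterations; ++iter) {
        // ... master prepares the data for this cycle ...
        std::vector<std::thread> workers;
        for (std::size_t i = 0; i < n_workers; ++i)
            workers.emplace_back(do_work, i);
        for (auto& t : workers)
            t.join(); // the master blocks here until every worker has finished this cycle
    }
}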
Option 2: locks
You don't want the main thread to notify as long as one of the threads is still working. So each thread gets its own lock, which it locks before doing the calculations and unlocks afterwards. Your main thread locks all of them before calling notify() and unlocks them afterwards.
I see nothing fundamentally wrong with your solution.
Guard workers with workers_mtx and done.
We could abstract this with a counting semaphore.
struct counting_semaphore {
    std::unique_ptr<std::mutex> m = std::make_unique<std::mutex>();
    std::ptrdiff_t count = 0;
    std::unique_ptr<std::condition_variable> cv = std::make_unique<std::condition_variable>();

    counting_semaphore( std::ptrdiff_t c=0 ):count(c) {}
    counting_semaphore(counting_semaphore&&)=default;

    void take(std::size_t n = 1) {
        std::unique_lock<std::mutex> lock(*m);
        cv->wait(lock, [&]{ if (count-std::ptrdiff_t(n) < 0) return false; count-=n; return true; } );
    }
    void give(std::size_t n = 1) {
        {
            std::unique_lock<std::mutex> lock(*m);
            count += n;
            if (count <= 0) return;
        }
        cv->notify_all();
    }
};
take takes count away, and blocks if there is not enough.
give adds to count, and notifies if there is a positive amount.
Now the worker threads ferry tokens between two semaphores.
std::vector< counting_semaphore > m_worker_start{count};
counting_semaphore m_worker_done{0}; // not count, zero
std::atomic<bool> m_shutdown = false;

// master controller:
for (each step) {
    for (auto&& starts : m_worker_start)
        starts.give();
    m_worker_done.take(count);
}

// master shutdown:
m_shutdown = true;
// wake up forever:
for (auto&& starts : m_worker_start)
    starts.give(std::size_t(-1)/2);

// worker thread:
while (true) {
    master->m_worker_start[my_id].take();
    if (master->m_shutdown) return;
    // do work
    master->m_worker_done.give();
}
or somesuch.
live example.