I've implemented a "Ticket" class which is shared as a shared_ptr between multiple threads.
The program flow is like this:
parallelQuery() is called to start a new query job. A shared instance of Ticket is created.
The query is split into multiple tasks, and each task is enqueued on a worker thread (this part is important, otherwise I'd just join the threads and be done). Each task gets the shared ticket.
ticket.wait() is called to wait for all tasks of the job to complete.
When one task is done it calls the done() method on the ticket.
When all tasks are done the ticket is unlocked, the result data from the tasks is aggregated and returned from parallelQuery().
In pseudo code:
std::vector<T> parallelQuery(std::string str) {
auto ticket = std::make_shared<Ticket>(2);
auto task1 = std::make_unique<Query>(ticket, str+"a");
addTaskToWorker(task1);
auto task2 = std::make_unique<Query>(ticket, str+"b");
addTaskToWorker(task2);
ticket->waitUntilDone();
auto result = aggregateData(task1, task2);
return result;
}
My code works. But I wonder whether it is theoretically possible that it leads to a deadlock when the mutex is unlocked right before the waiter thread locks it again in waitUntilDone().
Is this a possibility, and how do I avoid this trap?
Here is the complete Ticket class, note the execution order example comments related to the problem description above:
#include <mutex>
#include <atomic>
class Ticket {
public:
Ticket(int numTasks = 1) : _numTasks(numTasks), _done(0), _canceled(false) {
_mutex.lock();
}
void waitUntilDone() {
_doneLock.lock();
if (_done != _numTasks) {
_doneLock.unlock(); // Execution order 1: "waiter" thread is here
_mutex.lock(); // Execution order 3: "waiter" thread is now in a deadlock?
}
else {
_doneLock.unlock();
}
}
void done() {
_doneLock.lock();
_done++;
if (_done == _numTasks) {
_mutex.unlock(); // Execution order 2: "task1" thread unlocks the mutex
}
_doneLock.unlock();
}
void cancel() {
_canceled = true;
_mutex.unlock();
}
bool wasCanceled() {
return _canceled;
}
bool isDone() {
return _done >= _numTasks;
}
int getNumTasks() {
return _numTasks;
}
private:
std::atomic<int> _numTasks;
std::atomic<int> _done;
std::atomic<bool> _canceled;
// mutex used for caller wait state
std::mutex _mutex;
// mutex used to safeguard done counter with lock condition in waitUntilDone
std::mutex _doneLock;
};
One possible solution which just came to my mind when editing the question is that I could put _done++; before the _doneLock.lock(). Perhaps this would be enough?
Update
I've updated the Ticket class based on the suggestions provided by Tomer and Phil1970. Does the following implementation avoid mentioned pitfalls?
#include <atomic>
#include <condition_variable>
#include <mutex>
class Ticket {
public:
Ticket(int numTasks = 1) : _numTasks(numTasks), _done(0), _canceled(false) { }
void waitUntilDone() {
std::unique_lock<std::mutex> lock(_mutex);
// loop to avoid spurious wakeups
while (_done != _numTasks && !_canceled) {
_condVar.wait(lock);
}
}
void done() {
std::unique_lock<std::mutex> lock(_mutex);
// just bail out in case we call done more often than needed
if (_done == _numTasks) {
return;
}
_done++;
_condVar.notify_one();
}
void cancel() {
std::unique_lock<std::mutex> lock(_mutex);
_canceled = true;
_condVar.notify_one();
}
bool wasCanceled() const {
return _canceled;
}
bool isDone() const {
return _done >= _numTasks;
}
int getNumTasks() const {
return _numTasks;
}
private:
std::atomic<int> _numTasks;
std::atomic<int> _done;
std::atomic<bool> _canceled;
std::mutex _mutex;
std::condition_variable _condVar;
};
Don't write your own wait methods but use std::condition_variable instead.
https://en.cppreference.com/w/cpp/thread/condition_variable.
Mutexes usage
Generally, a mutex should protect a given region of code. That is, it should lock, do its work and unlock. In your class, you have multiple methods where some lock _mutex while others unlock it. This is very error-prone because if you call the methods in the wrong order, you might well end up in an inconsistent state. What happens if a mutex is locked twice? Or unlocked when it is already unlocked?
The other thing to be aware of with mutexes is that if you have multiple mutexes, you can easily get a deadlock if you need to lock both but don't do it in a consistent order. Suppose that thread A locks mutex 1 first and then mutex 2, and thread B locks them in the opposite order (mutex 2 first). There is a possibility that something like this occurs:
Thread A locks mutex 1
Thread B locks mutex 2
Thread A wants to lock mutex 2 but cannot as it is already locked.
Thread B wants to lock mutex 1 but cannot as it is already locked.
Both threads will wait forever
So in your code, you should at least have some checks to ensure proper usage. For example, you should verify _canceled before unlocking the mutex to ensure cancel is called only once.
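To make the lock-ordering scenario above concrete, here is a small sketch (the mutex names are mine, not from your class); the second version uses std::lock so both mutexes are acquired together and the ordering problem disappears:
#include <mutex>

std::mutex mutex1, mutex2;

// Deadlock-prone: the two threads take the mutexes in opposite order.
void threadA_bad() { std::lock_guard<std::mutex> a(mutex1); std::lock_guard<std::mutex> b(mutex2); /* ... */ }
void threadB_bad() { std::lock_guard<std::mutex> b(mutex2); std::lock_guard<std::mutex> a(mutex1); /* ... */ }

// Safe: std::lock acquires both mutexes without risking the lock-order deadlock.
void threadA_good()
{
    std::unique_lock<std::mutex> a(mutex1, std::defer_lock);
    std::unique_lock<std::mutex> b(mutex2, std::defer_lock);
    std::lock(a, b);
    // ... work with both resources ...
}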
Solution
I will just give some ideas.
Declare a mutex and a condition_variable to manage the done condition in your class:
std::mutex doneMutex;
std::condition_variable doneCondition;
Then waitUntilDone would look like:
void waitUntilDone()
{
std::unique_lock<std::mutex> lk(doneMutex);
doneCondition.wait(lk, [this] { return isDone() || wasCanceled(); });
}
And done function would look like:
void done()
{
std::lock_guard<std::mutex> lk(doneMutex);
_done++;
if (_done == _numTasks)
{
doneCondition.notify_one();
}
}
And the cancel function would become:
void cancel()
{
std::lock_guard<std::mutex> lk(doneMutex);
_canceled = true;
doneCondition.notify_one();
}
As you can see, you only have one mutex now so you basically eliminate the possibility of a deadlock.
Variable naming
I suggest that you not use lock in the name of your mutex, since it is confusing.
std::mutex someMutex;
std::lock_guard<std::mutex> someLock(someMutex); // std::unique_lock when needed
That way, it is far easier to know which variable refers to the mutex and which one to the lock on the mutex.
Good reading
If you are serious about multithreading, then you should buy that book:
C++ Concurrency in Action
Practical Multithreading
Anthony Williams
Code Review (added section)
Essentially the same code has been posted to Code Review: https://codereview.stackexchange.com/questions/225863/multithreading-ticket-class-to-wait-for-parallel-task-completion/225901#225901.
I have put an answer there that includes some extra points.
You do not need to use a mutex to operate on atomic values.
UPD
My answer to the main question was wrong; I deleted it.
You can also use a plain (non-atomic) int _numTasks;. And you do not need a shared pointer - just create the Ticket on the stack and pass a pointer:
Ticket ticket(2);
auto task1 = std::make_unique<Query>(&ticket, str+"a");
addTaskToWorker(task1);
or a unique_ptr if you like:
auto ticket = std::make_unique<Ticket>(2);
auto task1 = std::make_unique<Query>(ticket.get(), str+"a");
addTaskToWorker(task1);
because the shared pointer can be cut away by Occam's razor :)
Related
The Test class is used in a multithreaded environment. ThreadA asks whether it has to wait for ThreadB by calling the hasToWait method (ITestWaiter). When ThreadB has done its work, it notifies all waiters by calling the Test::notify method.
Could you tell me if there is a possible deadlock situation in the wait() method - between the part, which is locked by the mutex and the call to the semaphore acquire method?
struct Semaphore {
bool acquire() { return WaitForSingleObject(sem, INFINITE) == WAIT_OBJECT_0; }
private:
Handle sem;
};
struct Test
{
bool wait(std::mutex& mutex, const ITestWaiter *obj);
bool notify(std::mutex& mutex);
private:
std::vector<Semaphore> waiters;
};
bool Test::wait(std::mutex& mutex, const ITestWaiter *obj) {
Semaphore* sem;
{
std::unique_lock<std::mutex> mlock(mutex);
if (!obj->hasToWait())
return false;
sem = createSemaphoreAndPushBackToVector();
}
try {
sem->acquire();
}
catch (const std::exception& e) {}
return true;
}
bool Test::notify(std::mutex& mutex) {
std::unique_lock<std::mutex> mlock(mutex);
//notify waiters by releasing the semaphore
return true;
}
From the code you posted, there shouldn't be a problem: In both cases, you do not block during the time you hold the lock; you just do some small actions (once modify the vector, once iterate over it) instead. But there's code you didn't show!
First, there's how you are going to notify. I assume you use CreateEvent to get the handle and SetEvent for notification – if so, no problem either.
Then, there's the hasToWait function. Suspicious: You are calling it while already holding the lock! Is there any reason for that? Does hasToWait do some locking, too? Does the other thread possibly try to lock the same facility? Then there is a risk of deadlock if both threads do not acquire the locks in the same order.
If there's no separate locking involved, but hasToWait needs to access some resources that need to be protected by the same mutex, then the code as is is fine, too.
If there's no locking and no access to shared resources, then locking the mutex first is pointless and just costs time; in this case, checking first is more efficient:
if (obj->hasToWait())
{
Semaphore* sem;
{
std::unique_lock<std::mutex> mlock(mutex);
sem = createSemaphoreAndPushBackToVector();
}
try
{
sem->acquire();
}
catch (const std::exception& e)
{ }
}
I am using C++11 and I have a std::thread which is a class member, and it sends information to listeners every 2 minutes. Other than that it just sleeps. So, I have made it sleep for 2 minutes, then send the required info, and then sleep for 2 minutes again.
// MyClass.hpp
class MyClass {
public:
~MyClass();
void RunMyThread();
private:
std::thread my_thread;
std::atomic<bool> m_running;
};
void MyClass::RunMyThread() {
my_thread = std::thread { [this] {
m_running = true;
while(m_running) {
std::this_thread::sleep_for(std::chrono::minutes(2));
SendStatusInfo(some_info);
}
}};
}
// Destructor
MyClass::~MyClass() {
m_running = false; // this won't work while the thread is sleeping. How do I exit the thread here?
}
Issue:
The issue with this approach is that I cannot exit the thread while it is sleeping. I understand from reading that I can wake it using a std::condition_variable and exit gracefully? But I am struggling to find a simple example which does the bare minimum as required in above scenario. All the condition_variable examples I've found look too complex for what I am trying to do here.
Question:
How can I use a std::condition_variable to wake the thread and exit gracefully while it is sleeping? Or are there any other ways of achieving the same without the condition_variable technique?
Additionally, I see that I need to use a std::mutex in conjunction with std::condition_variable? Is that really necessary? Is it not possible to achieve the goal by adding the std::condition_variable logic only to required places in the code here?
Environment:
Linux and Unix with compilers gcc and clang.
How can I use an std::condition_variable to wake the thread and exit gracefully while it was sleeping? Or are there any other ways of achieving the same without condition_variable technique?
No, not in standard C++ as of C++17 (there are of course non-standard, platform-specific ways to do it, and it's likely some kind of semaphore will be added to C++2a).
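For readers on C++20 or later, here is a sketch of how such a semaphore makes this easy (std::binary_semaphore; not available in C++17, and SendStatusInfo/some_info are the question's names):
#include <chrono>
#include <semaphore>  // C++20

std::binary_semaphore stop_signal{0};

void worker()
{
    // Sleeps for up to 2 minutes per iteration, but wakes immediately
    // once another thread calls stop_signal.release().
    while (!stop_signal.try_acquire_for(std::chrono::minutes(2)))
    {
        // SendStatusInfo(some_info);  // as in the question
    }
}

// To stop: stop_signal.release(); then join the thread.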
Additionally, I see that I need to use a std::mutex in conjunction with std::condition_variable? Is that really necessary?
Yes.
Is it not possible to achieve the goal by adding the std::condition_variable logic only to required places in the code piece here?
No. For a start, you can't wait on a condition_variable without locking a mutex (and passing the lock object to the wait function) so you need to have a mutex present anyway. Since you have to have a mutex anyway, requiring both the waiter and the notifier to use that mutex isn't such a big deal.
Condition variables are subject to "spurious wake ups" which means they can stop waiting for no reason. In order to tell if it woke because it was notified, or woke spuriously, you need some state variable that is set by the notifying thread and read by the waiting thread. Because that variable is shared by multiple threads it needs to be accessed safely, which the mutex ensures.
Even if you use an atomic variable for the shared variable, you still typically need a mutex to avoid missed notifications.
This is all explained in more detail in
https://github.com/isocpp/CppCoreGuidelines/issues/554
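To see why the mutex matters even with an atomic flag, here is a deliberately broken sketch (the names are mine) of how a notification gets lost:
#include <atomic>
#include <condition_variable>
#include <mutex>

std::atomic<bool> stop{false};
std::condition_variable cv;
std::mutex m;

void waiter()
{
    std::unique_lock<std::mutex> lock(m);
    while (!stop)          // (1) reads stop == false
    {
        cv.wait(lock);     // (3) blocks; the notification sent at (2) is already lost
    }
}

void notifier()
{
    stop = true;           // (2) set without holding m...
    cv.notify_one();       // ...and notified between (1) and (3): the waiter sleeps forever
}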
A working example for you using std::condition_variable:
struct MyClass {
MyClass()
: my_thread([this]() { this->thread(); })
{}
~MyClass() {
{
std::lock_guard<std::mutex> l(m_);
stop_ = true;
}
c_.notify_one();
my_thread.join();
}
void thread() {
while(this->wait_for(std::chrono::minutes(2)))
SendStatusInfo(some_info);
}
// Returns false if stop_ == true.
template<class Duration>
bool wait_for(Duration duration) {
std::unique_lock<std::mutex> l(m_);
return !c_.wait_for(l, duration, [this]() { return stop_; });
}
std::condition_variable c_;
std::mutex m_;
bool stop_ = false;
std::thread my_thread;
};
How can I use an std::condition_variable to wake the thread and exit gracefully while it was sleeping?
You use std::condition_variable::wait_for() instead of std::this_thread::sleep_for(), and the former can be interrupted by std::condition_variable::notify_one() or std::condition_variable::notify_all().
Additionally, I see that I need to use a std::mutex in conjunction with std::condition_variable? Is that really necessary? Is it not possible to achieve the goal by adding the std::condition_variable logic only to required places in the code piece here?
Yes, it is necessary to use std::mutex with std::condition_variable, and you should use it instead of making your flag std::atomic: despite the atomicity of the flag itself, you would still have a race condition in your code, and you would notice that your sleeping thread sometimes misses a notification if you did not use a mutex here.
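A minimal sketch of that replacement (the flag, mutex and helper names are mine; SendStatusInfo stands in for the question's call):
#include <chrono>
#include <condition_variable>
#include <mutex>

std::condition_variable cv;
std::mutex m;
bool stop = false;  // protected by m

void SendStatusInfo();  // stand-in for the question's SendStatusInfo(some_info)

void worker()
{
    for (;;)
    {
        {
            std::unique_lock<std::mutex> lock(m);
            // Wait up to 2 minutes; wake immediately if stop is set and cv is notified.
            if (cv.wait_for(lock, std::chrono::minutes(2), [] { return stop; }))
                return;  // stop requested
        }
        SendStatusInfo();  // called without holding the mutex
    }
}

void request_stop()
{
    {
        std::lock_guard<std::mutex> lock(m);
        stop = true;
    }
    cv.notify_one();
}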
There is a sad, but true fact - what you are looking for is a signal, and Posix threads do not have a true signalling mechanism.
Also, the only Posix threading primitive associated with any sort of timing is the condition variable; this is why your online search led you to it, and since the C++ threading model is heavily built on the Posix API, Posix-compatible primitives are all you get in standard C++.
Unless you are willing to go outside of Posix (you do not indicate the platform, but there are native platform ways to work with events which are free from those limitations, notably eventfd on Linux), you will have to stick with condition variables, and yes, working with a condition variable requires a mutex, since it is built into the API.
Your question doesn't specifically ask for code sample, so I am not providing any. Let me know if you'd like some.
Additionally, I see that I need to use a std::mutex in conjunction with std::condition_variable? Is that really necessary? Is it not possible to achieve the goal by adding the std::condition_variable logic only to required places in the code piece here?
std::condition_variable is a low level primitive. Actually using it requires fiddling with other low level primitives as well.
struct timed_waiter {
void interrupt() {
auto l = lock();
interrupted = true;
cv.notify_all();
}
// returns false if interrupted
template<class Rep, class Period>
bool wait_for( std::chrono::duration<Rep, Period> how_long ) const {
auto l = lock();
return !cv.wait_until( l,
std::chrono::steady_clock::now() + how_long,
[&]{
return interrupted;
}
);
}
private:
std::unique_lock<std::mutex> lock() const {
return std::unique_lock<std::mutex>(m);
}
mutable std::mutex m;
mutable std::condition_variable cv;
bool interrupted = false;
};
Simply create a timed_waiter somewhere where both the thread(s) that want to wait and the code that wants to interrupt can see it.
The waiting threads do
while(m_timer.wait_for(std::chrono::minutes(2))) {
SendStatusInfo(some_info);
}
To interrupt, call m_timer.interrupt() (say, in the dtor), then my_thread.join() to let it finish.
Live example:
struct MyClass {
~MyClass();
void RunMyThread();
private:
std::thread my_thread;
timed_waiter m_timer;
};
void MyClass::RunMyThread() {
my_thread = std::thread {
[this] {
while(m_timer.wait_for(std::chrono::seconds(2))) {
std::cout << "SendStatusInfo(some_info)\n";
}
}};
}
// Destructor
MyClass::~MyClass() {
std::cout << "~MyClass::MyClass\n";
m_timer.interrupt();
my_thread.join();
std::cout << "~MyClass::MyClass done\n";
}
int main() {
std::cout << "start of main\n";
{
MyClass x;
x.RunMyThread();
using namespace std::literals;
std::this_thread::sleep_for(11s);
}
std::cout << "end of main\n";
}
Or are there any other ways of achieving the same without the condition_variable technique?
You can use std::promise/std::future as a simpler alternative to a bool/condition_variable/mutex in this case. A future is not susceptible to spurious wakes and doesn't require a mutex for synchronisation.
Basic example:
std::promise<void> pr;
std::thread thr{[fut = pr.get_future()]{
while(true)
{
if(fut.wait_for(std::chrono::minutes(2)) != std::future_status::timeout)
return;
}
}};
//When ready to stop
pr.set_value();
thr.join();
Or are there any other ways of achieving the same without condition_variable technique?
One alternative to a condition variable is that you can wake your thread up at much shorter, regular intervals to check the "running" flag, going back to sleep if it is still set and the allotted time has not yet expired:
void periodically_call(std::atomic_bool& running, std::chrono::milliseconds wait_time)
{
auto wake_up = std::chrono::steady_clock::now();
while(running)
{
wake_up += wait_time; // next signal send time
while(std::chrono::steady_clock::now() < wake_up)
{
if(!running)
break;
// sleep for just 1/10 sec (maximum)
auto pre_wake_up = std::chrono::steady_clock::now() + std::chrono::milliseconds(100);
pre_wake_up = std::min(wake_up, pre_wake_up); // don't overshoot
// keep going to sleep here until full time
// has expired
std::this_thread::sleep_until(pre_wake_up);
}
SendStatusInfo(some_info); // do the regular call
}
}
Note: You can make the actual wait time anything you want. In this example I made it 100 ms (std::chrono::milliseconds(100)). It depends on how responsive you want your thread to be to the signal to stop.
For example, in one application I made it one whole second, because I was happy for my application to wait a full second for all the threads to stop before it closed down on exit.
How responsive you need it to be is up to your application. The shorter the wake-up interval, the more CPU it consumes. However, even very short intervals of a few milliseconds will probably not register much in terms of CPU time.
You could also use promise/future so that you don't need to bother with condition variables and/or threads:
#include <future>
#include <iostream>
struct MyClass {
~MyClass() {
_stop.set_value();
}
MyClass() {
auto future = std::shared_future<void>(_stop.get_future());
_thread_handle = std::async(std::launch::async, [future] () {
std::future_status status;
do {
status = future.wait_for(std::chrono::seconds(2));
if (status == std::future_status::timeout) {
std::cout << "do periodic things\n";
} else if (status == std::future_status::ready) {
std::cout << "exiting\n";
}
} while (status != std::future_status::ready);
});
}
private:
std::promise<void> _stop;
std::future<void> _thread_handle;
};
int main() {
MyClass c;
std::this_thread::sleep_for(std::chrono::seconds(9));
}
I am using std::condition_variable for timing a signal in a multi-threaded program for controlling the flow of various critical sections. The program works, but during exit I am compelled to use a predicate (kill_ == true) to avoid destroying threads which are still waiting on std::condition_variable::wait(). I don't know if that is the proper way to destroy all the waiting threads; advice solicited. Here's a code snippet:
class timer
{
// ...
timer(std::shared_ptr<parent_object> parent,const bool& kill)
:parent_(parent),kill_(kill){}
private:
std::condition_variable cv_command_flow_;
std::mutex mu_flow_;
const bool& kill_;
std::shared_ptr<parent_object> parent_;
};
void timer::section()
{
auto delay = get_next_delay();
std::unique_lock<std::mutex> lock(mu_flow_);
cv_command_flow_.wait_until(lock, delay, [this] { return kill_ == true; });
if( kill_) return;
parent_->trigger();
cv_command_exec_.notify_all();
}
This is generally how I handle the destruction of my waiting threads. You'll want a code section such as this where you want to perform clean up (in a class destructor, the main thread before process exit, etc.):
{
std::lock_guard<std::mutex> lock(mu_flow);
kill_ = true;
}
cv_command_exec_.notify_all();
thread1.join();
I'm assuming that timer::section() was executing within some thread std::thread thread1.
Ownership duration of the mutex is controlled by the scoped block. You'll want the mutex held only when you set kill_ = true and released before you call .notify_all() (otherwise the woken thread might find the lock still held and go back to sleep).
Of course, std::unique_lock usage would look like:
std::unique_lock<std::mutex> lock(mu_flow);
kill_ = true;
lock.unlock();
cv_command_exec_.notify_all();
thread1.join();
It's personal preference to a large degree ... both code sections accomplish the same task.
This is a separate question but related to the previous question I asked here
I am using an std::thread in my C++ code to constantly poll for some data & add it to a buffer. I use a C++ lambda to start the thread like this:
void StartMyThread() {
thread_running = true;
the_thread = std::thread { [this] {
while(thread_running) {
GetData();
}
}};
}
thread_running is an atomic<bool> declared in class header. Here is my GetData function:
void GetData() {
//Some heavy logic
}
Next I also have a StopMyThread function where I set thread_running to false so that it exits out of the while loop in the lambda block.
void StopMyThread() {
thread_running = false;
the_thread.join();
}
As I understand, I can pause & resume the thread using a std::condition_variable as pointed out here in my earlier question.
But is there a disadvantage if I just use the std::atomic<bool> thread_running to execute or not execute the logic in GetData() like below ?
void GetData() {
if (thread_running == false)
return;
//Some heavy logic
}
Will this burn more CPU cycles compared to the approach of using an std::condition_variable as described here ?
A condition variable is useful when you want to conditionally halt another thread or not. So you might have an always-running "worker" thread that waits when it notices it has nothing to do.
The atomic solution requires your UI interaction to synchronize with the worker thread, or very complex logic to do it asynchronously.
As a general rule, your UI response thread should never block on non-ready state from worker threads.
#include <cassert>
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
struct worker_thread {
worker_thread( std::function<void()> t, bool play = true ):
task(std::move(t)),
execute(play)
{
thread = std::async( std::launch::async, [this]{
work();
});
}
// move is not safe. If you need this movable,
// use unique_ptr<worker_thread>.
worker_thread(worker_thread&& )=delete;
~worker_thread() {
if (!exit) finalize();
wait();
}
void finalize() {
auto l = lock();
exit = true;
cv.notify_one();
}
void pause() {
auto l = lock();
execute = false;
}
void play() {
auto l = lock();
execute = true;
cv.notify_one();
}
void wait() {
assert(exit);
if (thread.valid())
thread.get();
}
private:
void work() {
while(true) {
bool done = false;
{
auto l = lock();
cv.wait( l, [&]{
return exit || execute;
});
done = exit; // have lock here
}
if (done) break;
task();
}
}
std::unique_lock<std::mutex> lock() {
return std::unique_lock<std::mutex>(m);
}
std::mutex m;
std::condition_variable cv;
bool exit = false;
bool execute = true;
std::function<void()> task;
std::future<void> thread;
};
or somesuch.
This owns a thread. The thread repeatedly runs task so long as it is in play() mode. If you pause(), then the next time task() finishes the worker thread stops. If you play() before the task() call finishes, it doesn't notice the pause().
The only wait is on destruction of worker_thread, where it automatically informs the worker thread it should exit and it waits for it to finish.
You can manually .wait() or .finalize() as well. .finalize() is async, but if your app is shutting down you can call it early and give the worker thread more time to clean up while the main thread cleans things up elsewhere.
.finalize() cannot be reversed.
Code not tested.
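An equally untested usage sketch, where poll_once is a hypothetical stand-in for the actual work:
void poll_once() { /* poll for data, etc. */ }  // hypothetical task

int main()
{
    worker_thread poller([] { poll_once(); });  // starts in play mode and runs poll_once() repeatedly
    // ... from the controlling thread:
    poller.pause();  // takes effect once the current poll_once() call returns
    poller.play();   // resumes running the task
}   // destructor calls finalize() and wait(), so the worker thread exits cleanly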
Unless I'm missing something, you already answered this in your original question: You'll be creating and destroying the worker thread each time it's needed. This may or may not be an issue in your actual application.
There's two different problems being solved and it may depend on what you're actually doing. One problem is "I want my thread to run until I tell it to stop." The other seems to be a case of "I have a producer/consumer pair and want to be able to notify the consumer when data is ready." The thread_running and join method works well for the first of those. The second you may want to use a mutex and condition because you're doing more than just using the state to trigger work. Suppose you have a vector<Work>. You guard that with the mutex, so the condition becomes [&work] (){ return !work.empty(); } or something similar. When the wait returns, you hold the mutex so you can take things out of work and do them. When you're done, you go back to wait, releasing the mutex so the producer can add things to the queue.
You may want to combine these techniques. Have a "done processing" atomic that all of your threads periodically check to know when to exit so that you can join them. Use the condition to cover the case of data delivery between threads.
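For the vector<Work> producer/consumer case described above, a rough sketch (the Work type and the function names are mine):
#include <condition_variable>
#include <mutex>
#include <vector>

struct Work { /* ... */ };

std::mutex m;
std::condition_variable cv;
std::vector<Work> work;        // guarded by m
bool done_processing = false;  // guarded by m

void producer_add(Work w)
{
    {
        std::lock_guard<std::mutex> lock(m);
        work.push_back(std::move(w));
    }
    cv.notify_one();
}

void consumer()
{
    std::unique_lock<std::mutex> lock(m);
    for (;;)
    {
        cv.wait(lock, [] { return !work.empty() || done_processing; });
        if (work.empty())      // done_processing must be true here
            return;
        Work item = std::move(work.back());
        work.pop_back();
        lock.unlock();         // do the actual work without holding the mutex
        // process(item);
        lock.lock();
    }
}

void shut_down()
{
    {
        std::lock_guard<std::mutex> lock(m);
        done_processing = true;
    }
    cv.notify_all();
}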
I'm looking for a way to wait for multiple condition variables.
ie. something like:
boost::condition_variable cond1;
boost::condition_variable cond2;
void wait_for_data_to_process()
{
boost::unique_lock<boost::mutex> lock(mut);
wait_any(lock, cond1, cond2); //boost only provides cond1.wait(lock);
process_data();
}
Is something like this possible with condition variables? And if not, are there alternative solutions?
Thanks
I don't believe you can do anything like this with boost::thread. Perhaps because POSIX condition variables don't allow this type of construct. Of course, Windows has WaitForMultipleObjects as aJ posted, which could be a solution if you're willing to restrict your code to Windows synchronization primitives.
Another option would be to use fewer condition variables: just have one condition variable that you fire when anything "interesting" happens. Then, any time you want to wait, you run a loop that checks to see if your particular situation of interest has come up, and if not, go back to waiting on the condition variable. You should be waiting on condition variables in such a loop anyway, as condition variable waits are subject to spurious wakeups (from the boost::thread docs, emphasis mine):
void wait(boost::unique_lock<boost::mutex>& lock)
...
Effects:
Atomically call lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), or spuriously. ...
As Managu already answered, you can use the same condition variable and check for multiple "events" (bool variables) in your while loop. However, concurrent access to these bool variables must be protected using the same mutex that the condvar uses.
Since I already went through the trouble of typing this code example for a related question, I'll repost it here:
boost::condition_variable condvar;
boost::mutex mutex;
bool finished1 = false;
bool finished2 = false;
void longComputation1()
{
{
boost::lock_guard<boost::mutex> lock(mutex);
finished1 = false;
}
// Perform long computation
{
boost::lock_guard<boost::mutex> lock(mutex);
finished1 = true;
}
condvar.notify_one();
}
void longComputation2()
{
{
boost::lock_guard<boost::mutex> lock(mutex);
finished2 = false;
}
// Perform long computation
{
boost::lock_guard<boost::mutex> lock(mutex);
finished2 = true;
}
condvar.notify_one();
}
void somefunction()
{
// Wait for long computations to finish without "spinning"
boost::unique_lock<boost::mutex> lock(mutex);
while(!finished1 && !finished2)
{
condvar.wait(lock);
}
// Computations are finished
}
alternative solutions?
I am not sure about the Boost library, but you can use the WaitForMultipleObjects function to wait for multiple kernel objects. Just check if this helps.
As Managu points out, using multiple condition variables might not be a good solution in the first place. What you want to do should be possible to implement using semaphores, for example as sketched below.
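For example, with C++20's std::counting_semaphore (a POSIX sem_t works the same way), every event source releases one shared semaphore and the waiter acquires it, then re-checks which condition actually holds:
#include <semaphore>  // C++20; a POSIX sem_t would work the same way

std::counting_semaphore<> something_happened{0};

void event_source_1() { /* ... */ something_happened.release(); }
void event_source_2() { /* ... */ something_happened.release(); }

void wait_for_data_to_process()
{
    something_happened.acquire();  // wakes as soon as either source has signalled
    // re-check which condition actually holds, then
    // process_data();
}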
Using the same condition variable for multiple events technically works, but it doesn't allow encapsulation. So I had an attempt at making a class that supports it. Not tested yet! Also it doesn't support notify_one(), as I haven't worked out how to implement that.
#pragma once
#include <condition_variable>
#include <unordered_set>
// This is like a `condition_variable` but you can wait on multiple `multi_condition_variable`s.
// Internally it works by creating a new `condition_variable` for each `wait_any()` and registering
// it with the target `multi_condition_variable`s. When `notify_all()` is called, the main `condition_variable`
// is notified, as well as all the temporary `condition_variable`s created by `wait_any()`.
//
// There are two caveats:
//
// 1. You can't call the destructor if any threads are `wait()`ing. This is difficult to get around but
// it is the same as `std::condition_variable` anyway.
//
// 2. There is no `notify_one()`. You can *almost* implement this, but the only way I could think to do
// it was to add an `atomic_int` that indicates the number of waits(). Unfortunately there is no way
// to atomically increment it, and then wait.
class multi_condition_variable
{
public:
multi_condition_variable()
{
}
// Note that it is only safe to invoke the destructor if no thread is waiting on this condition variable.
~multi_condition_variable()
{
}
// Notify all threads calling wait(), and all wait_any()'s that contain this instance.
void notify_all()
{
_condition.notify_all();
for (auto o : _others)
o->notify_all();
}
// Wait for notify_all to be called, or a spurious wake-up.
void wait(std::unique_lock<std::mutex>& loc)
{
_condition.wait(loc);
}
// Wait for any of the notify_all()'s in `cvs` to be called, or a spurious wakeup.
static void wait_any(std::unique_lock<std::mutex>& loc, std::vector<std::reference_wrapper<multi_condition_variable>> cvs)
{
std::condition_variable c;
for (multi_condition_variable& cv : cvs)
cv.addOther(&c);
c.wait(loc);
for (multi_condition_variable& cv : cvs)
cv.removeOther(&c);
}
private:
void addOther(std::condition_variable* cv)
{
std::lock_guard<std::mutex> lock(_othersMutex);
_others.insert(cv);
}
void removeOther(std::condition_variable* cv)
{
// Note that *this may have been destroyed at this point.
std::lock_guard<std::mutex> lock(_othersMutex);
_others.erase(cv);
}
// The condition variable.
std::condition_variable _condition;
// When notified, also notify these.
std::unordered_set<std::condition_variable*> _others;
// Mutex to protect access to _others.
std::mutex _othersMutex;
};
// Example use:
//
// multi_condition_variable cond1;
// multi_condition_variable cond2;
//
// void wait_for_data_to_process()
// {
// unique_lock<boost::mutex> lock(mut);
//
// multi_condition_variable::wait_any(lock, {cond1, cond2});
//
// process_data();
// }