If a data race is not an issue, can I use std::condition_variable for starting (i.e., signaling) and stopping (i.e., waiting) a thread for work?
For example:
std::atomic<bool> quit = false;
std::atomic<bool> work = false;
std::mutex mtx;
std::condition_variable cv;
// if work, then do computation, otherwise wait on work (or quit) to become true
// thread reads: work, quit
void thread1()
{
    while ( !quit )
    {
        // limiting the scope of the mutex
        {
            std::unique_lock<std::mutex> lck(mtx);
            // what I want here is to wait on this lambda
            cv.wait(lck, []{ return work || quit; });
        }
        if ( work )
        {
            // work can become false again while working.
            // What I want here is to complete the work,
            // then wait on the next iteration.
            ComputeWork();
        }
    }
}
// work controller
// thread writes: work, quit
void thread2()
{
    if ( keyPress == '1' )
    {
        // is it OK not to use a mutex here?
        work = false;
    }
    else if ( keyPress == '2' )
    {
        // ... or here?
        work = true;
        cv.notify_all();
    }
    else if ( keyPress == ESC )
    {
        // ... or here?
        quit = true;
        cv.notify_all();
    }
}
Update/Summary: not safe, because of the 'lost wakeup' scenario that Adam describes.
cv.wait(lck, predicate); can be equivalently written as while (!predicate()) { cv.wait(lck); }.
To see the problem more easily: while (!predicate()) { /* lost wakeup can occur here */ cv.wait(lck); }
This can be fixed by putting any reads/writes of the predicate variables inside the mutex scope:
void thread2()
{
    if ( keyPress == '1' )
    {
        std::unique_lock<std::mutex> lck(mtx);
        work = false;
    }
    else if ( keyPress == '2' )
    {
        std::unique_lock<std::mutex> lck(mtx);
        work = true;
        cv.notify_all();
    }
    else if ( keyPress == ESC )
    {
        std::unique_lock<std::mutex> lck(mtx);
        quit = true;
        cv.notify_all();
    }
}
No, it is not safe. The waiting thread can acquire the mutex, check the predicate, and see nothing to wake up for. Then the signalling thread sets the bool and signals. Next, the waiting thread blocks on the cv and never awakens.
To avoid this, you must hold the mutex at some point between making the wakeup condition true and notifying the cv.
I have not looked at the "down" case (turning off the wakeup), and it may depend on exactly what behaviour is acceptable. Without that specified formally, I wouldn't do the unsynchronized write there either; in general, you should at least attempt sketches of formal proofs of correctness when fiddling with multi-threaded code, or your code will at best be accidentally correct.
If you can't do that, find someone who can write that code for you.
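For completeness, a minimal self-contained sketch of the safe pattern (the worker/main structure here is illustrative, not from the original post): the writer only changes the flags while holding the same mutex the waiter uses, then notifies.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
bool work = false; // protected by mtx
bool quit = false; // protected by mtx

void worker()
{
    std::unique_lock<std::mutex> lck(mtx);
    while ( !quit )
    {
        // wait() releases mtx while blocked and re-acquires it before returning
        cv.wait(lck, []{ return work || quit; });
        if ( work )
        {
            work = false;
            lck.unlock(); // don't hold the mutex during the actual work
            std::cout << "working\n";
            lck.lock();
        }
    }
}

int main()
{
    std::thread t(worker);
    {
        std::lock_guard<std::mutex> lck(mtx); // hold mtx while changing the predicate
        work = true;
    }
    cv.notify_all();
    {
        std::lock_guard<std::mutex> lck(mtx);
        quit = true;
    }
    cv.notify_all();
    t.join();
}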
Related
I want to check in one thread A if a condition is met; if the condition is true, I want another thread B to execute my code. Once that is done, thread B should wait until the condition is true again, then execute the code again, and so on. There is enough time to execute all the code in thread B before the condition becomes false. Basically, thread A runs at normal speed, and thread B only runs when thread A tells it it can run. I don't want to spawn a new thread B every time; it shouldn't stop, it should just execute its code and then wait until it's allowed to execute its code again.
How can I do that? Below is what I have so far, but I don't know how to run mainExecution() in this type of loop:
std::mutex m;
std::condition_variable cv_can_execute;
bool b_can_execute = false;

void mainExecution() {
    std::unique_lock lk(m);
    cv_can_execute.wait(lk, [] { return b_can_execute; });
    doSomethingElse();
}

void canExecute() {
    std::unique_lock lk(m);
    while (true) {
        bool condition = canRun();
        if (condition) {
            b_can_execute = true;
            cv_can_execute.notify_all();
        }
        else {
            b_can_execute = false;
        }
    }
    b_add_done = true;
    cv_add_done.notify_all();
}

int main() {
    std::thread canExec(canExecute);
    std::thread mainExec(mainExecution);
    canExec.join();
    mainExec.join();
}
In your code both threads immediately lock mutex m, so only one can run at a time.
That's why you don't see the behavior you expect.
You should only lock the mutex when you want to touch shared memory, in your case b_can_execute. The code should look something like this:
void mainExecution() {
    {
        std::unique_lock lk(m);
        cv_can_execute.wait(lk, [] { return b_can_execute; });
    } // Here the lock is released so A can do work.
    doSomethingElse();
}

void canExecute() {
    // std::unique_lock lk(m); Remove this
    while (true) {
        bool condition = canRun();
        if (condition) {
            {
                std::unique_lock lk(m); // Lock to change the shared variable.
                b_can_execute = true;
            } // Unlock here, so B can run.
            // It's best to unlock before you notify, so that B doesn't wake just to block again.
            cv_can_execute.notify_all();
        }
        else {
            std::unique_lock lk(m);
            b_can_execute = false;
        }
    }
    {
        std::unique_lock lk(m);
        b_add_done = true;
    }
    cv_add_done.notify_all();
}
Now, in your case you only lock the mutex to synchronize on a bool. This is usually seen as overkill, since the cost of locking and unlocking is relatively high. You could look at atomic variables, which would replace your bool and let the threads synchronize without the mutex; see the sketch below.
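One way that could look, as a minimal sketch assuming C++20 is available (which the question does not state), is to drop the mutex and condition variable entirely and use std::atomic's own wait/notify; the printed line stands in for doSomethingElse() and the canRun() logic:
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> b_can_execute{false};

void mainExecution() {
    b_can_execute.wait(false);          // block while the flag is still false
    std::cout << "doSomethingElse()\n"; // stand-in for the real work
}

void canExecute() {
    // ... decide that B may run (stand-in for the canRun() logic) ...
    b_can_execute = true;
    b_can_execute.notify_all();         // wake any thread blocked in wait(false)
}

int main() {
    std::thread b(mainExecution);
    std::thread a(canExecute);
    a.join();
    b.join();
}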
I have a group of objects, and each object has two threads. The Task thread processes the data and notifies the Decision thread that the data is ready, then waits for the Decision thread to decide whether to continue operations. The Decision thread waits for the Task thread's data, then consumes the data and makes a decision (notifying the Task thread that the decision is ready to fetch).
Task.cpp:
class Task {
public:
    void DoTask() {
        // process data
        {
            std::unique_lock<std::mutex> lck(mtx);
            data_ready = true;
            cv_data.notify_one();
            while ( decision_ready == false )
                cv_decision.wait( lck );
        }
        if ( decision ) {
            // continue task
        } else {
            // quit
        }
    }

    void SetDecision( bool flag ) { decision = flag; }
    bool GetDataFlag() const { return data_ready; }
    void SetDecisionFlag( bool flag ) { decision_ready = flag; }

    std::mutex mtx;
    std::condition_variable cv_data;
    std::condition_variable cv_decision;

private:
    bool decision;
    bool data_ready;
    bool decision_ready;
};
main.cpp:
void Decision( Task *task );

int main() {
    Task mytask[10];
    std::thread dotask[10];
    std::thread decision[10];
    for (int i = 0; i < 10; ++i)
    {
        dotask[i] = std::thread( &Task::DoTask, &mytask[i] );
        decision[i] = std::thread( Decision, &mytask[i] );
        dotask[i].detach();
        decision[i].detach();
    }
}

void Decision( Task *task )
{
    std::mutex mtx_decision;
    std::unique_lock<std::mutex> lck( task->mtx );
    while ( task->GetDataFlag() == false )
        task->cv_data.wait(lck);
    std::lock_guard<std::mutex> lk(mtx_decision);
    // check database and make decision
    task->SetDecision( true );
    task->SetDecisionFlag( true );
    task->cv_decision.notify_one();
}
What is the problem with this approach? The program works well only in the single-thread case. If I actually start two or more threads, I get a segmentation fault. I am not sure how to pass the condition variables between different scopes, and I hope someone can tell me the right way to do it. Thanks.
I suppose you need the same mutex and the same condition variable everywhere to get this working. Right now each Task object gets its own mutex and condition_variable, and each Decision call creates its own as well.
The most likely reason your application crashes is that you detach your threads and then your main() exits, killing the threads in the middle of whatever they are doing. I strongly advise against using detached threads.
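As a sketch of that advice (reusing the Task class and Decision() from the question), keep the std::thread objects and join them before main() returns instead of detaching them:
#include <thread>

int main() {
    Task mytask[10];
    std::thread dotask[10];
    std::thread decision[10];
    for (int i = 0; i < 10; ++i) {
        dotask[i]   = std::thread( &Task::DoTask, &mytask[i] );
        decision[i] = std::thread( Decision, &mytask[i] );
    }
    for (int i = 0; i < 10; ++i) {
        dotask[i].join();   // main() cannot return while the threads are still running
        decision[i].join();
    }
}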
I have the following code:
The main thread notifies a worker thread to start/stop some job. In the main thread the trigger is a UI button (Qt SDK in this case):
void PlaySlot(bool checked){
    boost::unique_lock<boost::mutex> lock(m_mutex);
    if(checked == true){
        m_isPlayMode = true;
        m_event.notify_one(); // tell thread to start playing
    }else{
        m_isPlayMode = false;
    }
}
Now, in the worker thread, once m_isPlayMode becomes true, a loop runs for a limited period of time; it exits when the time is up or m_isPlayMode becomes false.
Inside the thread operator:
while(true)
{
    boost::unique_lock<boost::mutex> lock(m_mutex);
    m_event.wait(lock); // wait for next event
    if(m_isPlayMode == true){
        while(m_frameIndex < totalFrames && m_isPlayMode){
            m_frameIndex++;
            // do some work
        }
        m_isPlayMode = false;
        emit playEnded(false);
    }
}
Now, what is happening is that after the loop starts playing, when PlaySlot() gets triggered with checked = false, it doesn't update m_isPlayMode and the program becomes unresponsive. I suspect it's a locking issue, as I am trying to lock a mutex which is already locked in the thread loop.
I solved it by removing the unique_lock from the PlaySlot method and converting m_isPlayMode to an atomic variable. It works.
But I want to know two things:
Are there any perils in such a solution?
Can it be solved in another way?
Note that m_isPlayMode is protected by the same mutex, and hence can't be updated while the worker is running. Use two separate mutexes for these, or atomics.
Edit: a quick fix would probably be to add a second mutex:
void PlaySlot(bool checked){
    boost::unique_lock<boost::mutex> lock(m_isPlayModeMutex); // <--
    // ...
}
worker thread:
for (;;) {
    boost::unique_lock<boost::mutex> lock(m_mutex);
    m_event.wait(lock); // wait for next event
    boost::unique_lock<boost::mutex> playModeLock(m_isPlayModeMutex);
    if(m_isPlayMode == true){
        while(m_frameIndex < totalFrames && m_isPlayMode){
            playModeLock.unlock();
            // ... (not locked here)
            playModeLock.lock();
        }
        m_isPlayMode = false;
        emit playEnded(false);
    }
}
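For comparison, a rough sketch of the atomic variant the questioner mentions (member names and the Qt emit are taken from the question; the surrounding class is omitted, and the predicated wait is an addition to avoid the lost-wakeup issue discussed earlier):
boost::atomic<bool> m_isPlayMode(false); // no longer guarded by m_mutex

void PlaySlot(bool checked){
    if(checked){
        m_isPlayMode = true;
        boost::lock_guard<boost::mutex> lock(m_mutex); // held only for the notify handshake
        m_event.notify_one();
    }else{
        m_isPlayMode = false; // no lock, no notify: the playing loop polls this flag
    }
}

// worker thread
for (;;) {
    {
        boost::unique_lock<boost::mutex> lock(m_mutex);
        m_event.wait(lock, []{ return m_isPlayMode.load(); }); // predicated wait
    }
    while(m_frameIndex < totalFrames && m_isPlayMode){
        m_frameIndex++;
        // do some work
    }
    m_isPlayMode = false;
    emit playEnded(false);
}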
This application is a recursive, multi-threaded, detached one. Each thread regenerates a new bunch of threads before it dies.
Option 1 (below) works, but the mutex is a shared resource and hence slows the application down.
Option 2 should remove this bottleneck.
Option 1 works:
std::condition_variable cv;
bool ready = false;
std::mutex mu;

// go triggers the thread's function
void go() {
    std::unique_lock<std::mutex> lck( mu );
    ready = true;
    cv.notify_all();
}

void ThreadFunc ( ... ) {
    std::unique_lock<std::mutex> lck ( mu );
    cv.wait(lck, []{ return ready; });
    // do something useful
}
Option 2 does NOT trigger the thread:
std::array<std::mutex, DUToutputs*MaxGnodes> arrMutex;

void go ( long m, long Channel )
{
    std::unique_lock<std::mutex> lck( arrMutex[m + MaxGnodes*Channel] );
    ready = true;
    cv.notify_all();
}

void ThreadFunc ( ... ) {
    std::unique_lock<std::mutex> lck ( arrMutex[Inst + MaxGnodes*Channel] );
    while (!ready) cv.wait(lck);
    // do something useful
}
How can I make option #2 work?
The code in Option 2 contains a so-called data race on the variable ready, because the read and write operations on this variable are no longer synchronized. The behaviour of programs with data races is undefined. You can remove the data race by changing bool ready to std::atomic<bool> ready.
That should already fix the problem in Option 2. However, if you use std::atomic, you can also make other optimizations:
std::atomic<bool> ready{false};

void go(long m, long Channel) {
    // no lock required
    ready = true;
    cv.notify_all();
}

void ThreadFunc( ... ) {
    std::unique_lock<std::mutex> lck(arrMutex[Inst + MaxGnodes*Channel]);
    cv.wait(lck, [] { return ready.load(); });
    // do something useful
}
I have three threads in my application; the first thread needs to wait for data to be ready from the other two threads, which prepare the data concurrently.
In order to do that I am using a condition variable in C++ as follows:
boost::mutex mut;
boost::condition_variable cond;
Thread1:
bool check_data_received()
{
    return (data1_received && data2_received);
}

// Wait until socket data has arrived
boost::unique_lock<boost::mutex> lock(mut);
if (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
                     boost::bind(&check_data_received)))
{
}
Thread2:
{
    boost::lock_guard<boost::mutex> lock(mut);
    data1_received = true;
}
cond.notify_one();
Thread3:
{
    boost::lock_guard<boost::mutex> lock(mut);
    data2_received = true;
}
cond.notify_one();
So my question is: is it correct to do it this way, or is there a more efficient approach? I am looking for the most optimized way to do the waiting.
It looks like you want a semaphore here, so you can wait for two "resources" to be "taken".
For now, just replace the mutual exclusion with an atomic; you can still use a cv to signal the waiter:
#include <boost/thread.hpp>
#include <iostream>

boost::mutex mut;
boost::condition_variable cond;

boost::atomic_bool data1_received(false);
boost::atomic_bool data2_received(false);

bool check_data_received()
{
    return (data1_received && data2_received);
}

void thread1()
{
    // Wait until socket data has arrived
    boost::unique_lock<boost::mutex> lock(mut);
    while (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
                            boost::bind(&check_data_received)))
    {
        std::cout << "." << std::flush;
    }
}

void thread2()
{
    boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
    data1_received = true;
    cond.notify_one();
}

void thread3()
{
    boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
    data2_received = true;
    cond.notify_one();
}

int main()
{
    boost::thread_group g;
    g.create_thread(thread1);
    g.create_thread(thread2);
    g.create_thread(thread3);
    g.join_all();
}
Note:
Warning: it's essential that you know only the waiter is waiting on the cv; otherwise you need notify_all() instead of notify_one().
It is not important that the waiter is already waiting before the workers signal their completion, because the predicated timed_wait checks the predicate before blocking.
Because this sample uses atomics and a predicated wait, it's not actually critical to signal the cv under the mutex. However, thread checkers will (rightly) complain about this, I think, because it's impossible for them to verify proper synchronization unless you add the locking.
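To illustrate the semaphore remark above, a hedged sketch of the same "wait for two resources" idea using a plain counter guarded by the mutex (a hand-rolled semaphore; the names pending, producer_done and waiter are illustrative, not from the original post). C++20's std::counting_semaphore would be another option.
#include <boost/thread.hpp>

boost::mutex mut;
boost::condition_variable cond;
int pending = 2; // number of producers we are still waiting for

void producer_done()
{
    {
        boost::lock_guard<boost::mutex> lock(mut);
        --pending; // this producer's data is ready
    }
    cond.notify_one(); // only one thread ever waits, so notify_one is enough
}

void waiter()
{
    boost::unique_lock<boost::mutex> lock(mut);
    while (pending > 0) // also guards against spurious wakeups
        cond.wait(lock);
    // both producers have finished here
}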