C++ Boost: Variable sync between 2 threads

I have the following code:
The main thread notifies a worker thread to start/stop some job. In the main thread the trigger is a UI button (Qt SDK in this case):
void PlaySlot(bool checked){
    boost::unique_lock<boost::mutex> lock(m_mutex);
    if(checked == true){
        m_isPlayMode = true;
        m_event.notify_one(); // tell thread to start playing
    }else{
        m_isPlayMode = false;
    }
}
Now, in the worker thread, once m_isPlayMode becomes true, a loop starts running for a limited period of time; it exits when the time is up or m_isPlayMode becomes false.
Inside the thread operator:
while(true)
{
    boost::unique_lock<boost::mutex> lock(m_mutex);
    m_event.wait(lock); // wait for next event
    if(m_isPlayMode == true){
        while(m_frameIndex < totalFrames && m_isPlayMode){
            m_frameIndex++;
            // do some work
        }
        m_isPlayMode = false;
        emit playEnded(false);
    }
}
Now, what happens is that after the loop starts playing, when PlaySlot() gets triggered with 'checked' = false it doesn't update m_isPlayMode and the program becomes unresponsive. I suspect it is a race condition issue, as I am trying to lock a mutex which is already locked in the thread loop.
I solved it by removing the unique_lock from the PlaySlot method and converting m_isPlayMode to an atomic variable. It works.
But I want to know two things:
Are there any perils in such a solution?
Can it be solved in another way?

Note that m_isPlayMode is protected by the same mutex, hence can't be updated when the worker is running. Use two separate mutexes for these, or atomics.
Edit: a quick fix would probably be to add a second mutex:
void PlaySlot(bool checked){
    boost::unique_lock<boost::mutex> lock(m_isPlayModeMutex); // <--
    // ...
}
worker thread:
for (;;) {
    boost::unique_lock<boost::mutex> lock(m_mutex);
    m_event.wait(lock); // wait for next event
    boost::unique_lock<boost::mutex> playModeLock(m_isPlayModeMutex);
    if(m_isPlayMode == true){
        while(m_frameIndex < totalFrames && m_isPlayMode){
            playModeLock.unlock();
            // ... (not locked here)
            playModeLock.lock();
        }
        m_isPlayMode = false;
        emit playEnded(false);
    }
}
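For comparison, here is a minimal sketch of the atomic-flag variant the question describes (same member names as above, assumed to be members of the worker's owning class; boost::atomic<bool> is used, std::atomic<bool> works the same way). A predicated wait also avoids the lost-wakeup issue discussed further down this page:
boost::atomic<bool> m_isPlayMode{false};   // no longer guarded by a mutex

void PlaySlot(bool checked){
    if(checked){
        m_isPlayMode = true;
        m_event.notify_one();              // tell thread to start playing
    }else{
        m_isPlayMode = false;              // worker sees this on its next check
    }
}

// worker thread
while(true)
{
    boost::unique_lock<boost::mutex> lock(m_mutex);
    m_event.wait(lock, [this]{ return m_isPlayMode.load(); });
    lock.unlock();                         // the flag is atomic; no lock needed below
    while(m_frameIndex < totalFrames && m_isPlayMode){
        m_frameIndex++;
        // do some work
    }
    m_isPlayMode = false;
    emit playEnded(false);
}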

Related

Handle mutex lock in callback c++

I've got a Timer class that can run with both an initial time and an interval. There's an internal function internalQuit that performs thread.join() before a thread is started again in resetCallback. The thing is that each public function has its own std::lock_guard on the mutex to prevent the data from being written concurrently. I'm now running into an issue where, when the callback is used to, for example, stop the timer, the mutex cannot be locked by stop(). I'm hoping to get some help on how to tackle this issue.
class Timer
{
public:
    Timer(string_view identifier, Function &&timeoutHandler, Duration initTime, Duration intervalTime);
    void start();
    void stop() // for example
    {
        std::lock_guard lock{mutex};
        running = false;
        sleepCv.notify_all();
    }
    void setInitTime();
    void setIntervalTime();
    void resetCallback(Function &&timeoutHandler)
    {
        internalQuit();
        {
            std::lock_guard lock{mutex};
            quit = false;
        }
        startTimerThread(std::forward<Function>(timeoutHandler));
    }
private:
    void internalQuit() // performs the thread join
    {
        {
            std::lock_guard lock{mutex};
            quit = true;
            running = false;
            sleepCv.notify_all();
        }
        thread.join();
    }
    void mainLoop(Function &&timeoutHandler)
    {
        while(!quit)
        {
            std::unique_lock lock{mutex};
            // wait for running with sleepCv.wait()
            // handle initTimer with sleepCv.wait_until()
            timeoutHandler(); // callback
            // handle intervalTimer with sleepCv.wait_until()
            timeoutHandler(); // callback
        }
    }
    void startTimerThread(Function &&timeoutHandler)
    {
        thread = std::thread([&, timeoutHandler = std::forward<Function>(timeoutHandler)](){
            mainLoop(timeoutHandler);
        });
    }
    std::thread thread{};
    std::mutex mutex{};
    std::condition_variable sleepCv{};
    // initTime, intervalTime and some booleans for updating with sleepCv.notify_all()
};
For testing this, I have the following test case in GTest. I'm expecting the timer to stop in the callback. Unfortunately, the timer hangs on acquiring the mutex lock in the stop() function.
std::atomic<int> callbackCounter;

void timerCallback()
{
    callbackCounter.fetch_add(1, std::memory_order_acq_rel);
}

TEST(timerTest, timerShouldStopWhenStoppedInNewCallback)
{
    std::atomic<int> testCounter{0};
    Timer<std::chrono::steady_clock> t{"timerstop", &timerCallback, std::chrono::milliseconds(0), std::chrono::milliseconds(100)};
    t.resetCallback([&]{
        testCounter += 1;
        t.stop();
    });
    t.start();
    sleepMilliSeconds(100);
    ASSERT_EQ(testCounter.load(), 1); // triggered by the original interval timeout
    sleepMilliSeconds(100);
    ASSERT_EQ(testCounter.load(), 1); // no trigger, because stopped in new callback
}
Removing all the mutexes from the public functions fixes the issue, but that could lead to race conditions when data is written to the member variables. Hence each function takes the lock before writing to, for example, the booleans.
I've tried looking into std::move to move the thread into a different variable during resetCallback and then call join on that one. I'm also investigating recursive_mutex, but I have no experience with using it (a rough sketch of that idea follows after the snippet below).
void resetCallback(Function &&timeoutHandler)
{
    internalQuit();
    {
        std::lock_guard lock{mutex};
        quit = false;
    }
    auto prevThread = std::thread(std::move(this->thread));
    // didn't know how to continue from here, requiring more self-study
    startTimerThread(std::forward<Function>(timeoutHandler));
}
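For completeness, a rough sketch of the recursive_mutex idea mentioned above (hypothetical and untested; note that std::condition_variable only works with std::unique_lock<std::mutex>, so the condition variable would have to become std::condition_variable_any):
std::recursive_mutex mutex;           // replaces std::mutex in the Timer
std::condition_variable_any sleepCv;  // condition_variable_any accepts any lockable

void stop()
{
    // Safe to call from inside the callback: the callback runs on the timer
    // thread while mainLoop() holds the mutex, and the same thread may
    // re-lock a recursive_mutex it already owns.
    std::lock_guard lock{mutex};
    running = false;
    sleepCv.notify_all();
}
// Caveat: sleepCv.wait() releases only one level of the recursive lock,
// so mainLoop() must not wait while it holds the mutex more than once.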
It's a new subject for me; I have worked with mutexes and timers before, but only on relatively simple things.
Thank you in advance.

Is it possible to run a thread, that executes a function in a loop, only when a condition is met, which is also checked in a loop?

I want to check in one thread A whether a condition is met; if the condition is true, I want another thread B to execute my code. Once that is done, thread B should wait until the condition is true again, then execute the code again, and so on. There is enough time to execute all the code in thread B before the condition becomes false. Basically, thread A runs at normal speed and thread B only runs when thread A tells it it can. I don't want to spawn a new thread B every time; it shouldn't stop, it should just execute its code and then wait until it's allowed to execute it again.
How can I do that? Below is what I have so far, but I don't know how to run mainExecution() in this type of loop.
std::mutex m;
std::condition_variable cv_can_execute;
bool b_can_execute = false;

void mainExecution() {
    std::unique_lock lk(m);
    cv_can_execute.wait(lk, [] { return b_can_execute; });
    doSomethingElse();
}

void canExecute() {
    std::unique_lock lk(m);
    while (true) {
        bool condition = canRun();
        if (condition) {
            b_can_execute = true;
            cv_can_execute.notify_all();
        }
        else {
            b_can_execute = false;
        }
    }
    b_add_done = true;
    cv_add_done.notify_all();
}

int main() {
    std::thread canExec(canExecute);
    std::thread mainExec(mainExecution);
    canExec.join();
    mainExec.join();
}
In your code both threads immediately lock mutex m, so only one can run at a time.
That's why you don't see the behavior you expect.
You should only lock the mutex when you want to touch shared memory, in your case b_can_execute. The code should look something like this:
void mainExecution() {
    {
        std::unique_lock lk(m);
        cv_can_execute.wait(lk, [] { return b_can_execute; });
    } // Here the lock is released so A can do work.
    doSomethingElse();
}

void canExecute() {
    // std::unique_lock lk(m); Remove this
    while (true) {
        bool condition = canRun();
        if (condition) {
            {
                std::unique_lock lk(m); // Lock to change the shared variable.
                b_can_execute = true;
            } // Unlock here, so B can run.
            // It's best to unlock before you notify, so that B doesn't wake just to block again.
            cv_can_execute.notify_all();
        }
        else {
            std::unique_lock lk(m);
            b_can_execute = false;
        }
    }
    {
        std::unique_lock lk(m);
        b_add_done = true;
    }
    cv_add_done.notify_all();
}
Now, in your case you only lock the mutex to synchronize on a bool. This is usually seen as overkill, since the cost of locking and unlocking is relatively high. You could look at atomic variables, which would replace your bool and let the threads synchronize without using the mutex.
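If C++20 is available, the atomic suggestion can even replace the condition variable, because std::atomic has its own wait()/notify_all(). A minimal sketch along those lines (canRun() and doSomethingElse() are the question's placeholders; main() stays as in the question):
#include <atomic>

std::atomic<bool> b_can_execute{false};

void mainExecution() {
    b_can_execute.wait(false);          // C++20: block until the value is no longer false
    doSomethingElse();
}

void canExecute() {
    while (true) {
        if (canRun()) {
            b_can_execute = true;
            b_can_execute.notify_all(); // wake any thread blocked in wait()
        } else {
            b_can_execute = false;
        }
    }
}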

condition_variable usage for signaling and waiting

If a data race is not an issue, can I use std::condition_variable for starting (i.e., signaling) and stopping (i.e., waiting) a thread for work?
For example:
std::atomic<bool> quit = false;
std::atomic<bool> work = false;
std::mutex mtx;
std::condition_variable cv;

// if work, then do computation; otherwise wait on work (or quit) to become true
// thread reads: work, quit
void thread1()
{
    while ( !quit )
    {
        // limiting the scope of the mutex
        {
            std::unique_lock<std::mutex> lck(mtx);
            // what I want here is to wait on this lambda
            cv.wait(lck, []{ return work || quit; });
        }
        if ( work )
        {
            // work can become false again while working.
            // What I want here is to complete the work,
            // then wait on the next iteration.
            ComputeWork();
        }
    }
}

// work controller
// thread writes: work, quit
void thread2()
{
    if ( keyPress == '1' )
    {
        // is it OK not to use a mutex here?
        work = false;
    }
    else if ( keyPress == '2' )
    {
        // ... or here?
        work = true;
        cv.notify_all();
    }
    else if ( keyPress == ESC )
    {
        // ... or here?
        quit = true;
        cv.notify_all();
    }
}
Update/Summary: not safe, because of the 'lost wakeup' scenario that Adam describes.
cv.wait(lck, predicate); can be written equivalently as while(!predicate()){ cv.wait(lck); }.
To see the problem more easily: while(!predicate()){ /* a lost wakeup can occur here */ cv.wait(lck); }
It can be fixed by putting any reads/writes of the predicate variables inside the mutex scope:
void thread2()
{
    if ( keyPress == '1' )
    {
        std::unique_lock<std::mutex> lck(mtx);
        work = false;
    }
    else if ( keyPress == '2' )
    {
        std::unique_lock<std::mutex> lck(mtx);
        work = true;
        cv.notify_all();
    }
    else if ( keyPress == ESC )
    {
        std::unique_lock<std::mutex> lck(mtx);
        quit = true;
        cv.notify_all();
    }
}
No, not safe. The waiting thread can acquire the mutex, check the predicate, and see nothing to wake up for. Then the signalling thread sets the bool and signals. Next, the waiting thread blocks on the cv and never awakens.
You must hold the mutex at some point between triggering the wakeup lambda condition, and notifying the cv, to avoid this.
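For instance, setting the flag under the lock and notifying after releasing it is one ordering that closes the window (sketched for the keyPress == '2' branch, reusing the question's names):
else if ( keyPress == '2' )
{
    {
        std::unique_lock<std::mutex> lck(mtx);
        work = true;        // modified while the mutex is held
    }
    cv.notify_all();        // notify after unlocking; the waiter either sees
                            // work == true in its predicate check, or is
                            // already blocked inside cv.wait() when this fires
}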
The "down" case (turning off wakeup) I have not looked at, and it may depend on what behaviour exactly is ok. Without that specified in a formal sense I wouldn't do it either; in general, you should at least attempt sketches of formal proofs of correctness when fiddling with multi threaded code, or your code will be at best accidentally working.
If you can't do that, find someone who can to write that code for you.

Using a single Condition Variable to pause multiple threads

I have a program that starts N threads (std::async/std::future). I want the main thread to set up some data, then have all the threads run while the main thread waits for them to finish, and then loop this whole process.
What I have at the moment is something like this:
int main()
{
    // Start N new threads (std::future/std::async)
    while(condition)
    {
        // Set up data here
        // Send data to threads
        {
            std::lock_guard<std::mutex> lock(mrun);
            bRun = true;
        }
        run.notify_all();
        // Wait for threads
        {
            std::unique_lock<std::mutex> lock(mrun);
            run.wait(lock, [] { return bDone; });
        }
        // Reset bools
        bRun = false;
        bDone = false;
    }
    // Get results from futures once complete
}
int thread()
{
    while(otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        run.wait(lock, [] { return bRun; });
        bDone = true;
        // Do thread stuff here
        lock.unlock();
        run.notify_all();
    }
}
But I can't see any sign of either the main thread or the worker threads waiting for each other! Any idea what I am doing wrong, or how I can do this?
There are a couple of problems. First, you're setting bDone as soon as the first worker wakes up. Thus the main thread wakes immediately and begins readying the next data set. You want to have the main thread wait until all workers have finished processing their data. Second, when a worker finishes processing, it loops around and immediately checks bRun. But it can't tell if bRun == true means that the next data set is ready or if the last data set is ready. You want to wait for the next data set.
Something like this should work:
std::mutex mrun;
std::condition_variable dataReady;
std::condition_variable workComplete;
int nCurrentIteration = 0;
int nWorkerCount = 0;

int main()
{
    // Start N new threads (std::future/std::async)
    while(condition)
    {
        // Set up data here
        // Send data to threads
        {
            std::lock_guard<std::mutex> lock(mrun);
            nWorkerCount = N;
            ++nCurrentIteration;
        }
        dataReady.notify_all();
        // Wait for threads
        {
            std::unique_lock<std::mutex> lock(mrun);
            workComplete.wait(lock, [] { return nWorkerCount == 0; });
        }
    }
    // Get results from futures once complete
}
int thread()
{
    int nNextIteration = 1;
    while(otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        dataReady.wait(lock, [&nNextIteration] { return nCurrentIteration == nNextIteration; });
        lock.unlock();
        ++nNextIteration;
        // Do thread stuff here
        lock.lock();
        if (--nWorkerCount == 0)
        {
            lock.unlock();
            workComplete.notify_one();
        }
    }
}
Be aware that this solution isn't quite complete. If a worker encounters an exception, then the main thread will hang (because the dead worker will never reduce nWorkerCount). You'll likely need a strategy to deal with that scenario.
Incidentally, this pattern is called a barrier.
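For reference, C++20 provides this pattern directly as std::barrier. A minimal, self-contained sketch (N, the iteration count and the work are placeholders, not taken from the question):
#include <barrier>
#include <thread>
#include <vector>

int main()
{
    constexpr int N = 4;                  // number of workers (placeholder)
    std::barrier sync(N + 1);             // N workers plus the main thread

    std::vector<std::jthread> workers;
    for (int i = 0; i < N; ++i) {
        workers.emplace_back([&] {
            for (int iter = 0; iter < 10; ++iter) {
                sync.arrive_and_wait();   // wait until main has published the data
                // ... process this iteration's data ...
                sync.arrive_and_wait();   // report completion, wait for the next round
            }
        });
    }

    for (int iter = 0; iter < 10; ++iter) {
        // ... set up data here ...
        sync.arrive_and_wait();           // release the workers
        sync.arrive_and_wait();           // wait for all workers to finish
    }
}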

Synchronizing three threads with Condition Variable

I have three threads in my application; the first thread needs to wait for data to be ready from the two other threads. The two other threads prepare the data concurrently.
In order to do that I am using a condition variable in C++ as follows:
boost::mutex mut;
boost::condition_variable cond;
Thread1:
bool check_data_received()
{
    return (data1_received && data2_received);
}

// Wait until socket data has arrived
boost::unique_lock<boost::mutex> lock(mut);
if (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
                     boost::bind(&check_data_received)))
{
}
Thread2:
{
    boost::lock_guard<boost::mutex> lock(mut);
    data1_received = true;
}
cond.notify_one();
Thread3:
{
    boost::lock_guard<boost::mutex> lock(mut);
    data2_received = true;
}
cond.notify_one();
So my question is: is it correct to do it this way, or is there a more efficient way? I am looking for the most optimized way to do the waiting.
It looks like you want a semaphore here, so you can wait for two "resources" to be "taken".
For now, just replace the mutual exclusion with atomics; you can still use a cv to signal the waiter:
#include <boost/thread.hpp>
#include <iostream>

boost::mutex mut;
boost::condition_variable cond;

boost::atomic_bool data1_received(false);
boost::atomic_bool data2_received(false);

bool check_data_received()
{
    return (data1_received && data2_received);
}

void thread1()
{
    // Wait until socket data has arrived
    boost::unique_lock<boost::mutex> lock(mut);
    while (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
                            boost::bind(&check_data_received)))
    {
        std::cout << "." << std::flush;
    }
}

void thread2()
{
    boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
    data1_received = true;
    cond.notify_one();
}

void thread3()
{
    boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
    data2_received = true;
    cond.notify_one();
}

int main()
{
    boost::thread_group g;
    g.create_thread(thread1);
    g.create_thread(thread2);
    g.create_thread(thread3);
    g.join_all();
}
Notes:
Warning: it's essential that you know only the waiter is waiting on the cv; otherwise you need notify_all() instead of notify_one().
It is not important that the waiter is already waiting before the workers signal their completion, because the predicated timed_wait checks the predicate before blocking.
Because this sample uses atomics and a predicated wait, it's not actually critical to signal the cv under the mutex. However, thread checkers will (rightly, I think) complain about this, because it's impossible for them to check proper synchronization unless you add the locking.
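For the record, the semaphore idea mentioned at the top of this answer maps directly onto C++20's std::counting_semaphore. A small sketch (standard library rather than Boost, keeping the 200 ms polling from the question):
#include <chrono>
#include <iostream>
#include <semaphore>
#include <thread>

std::counting_semaphore<2> dataReady(0);   // two "resources", both initially unavailable

void thread1()
{
    using namespace std::chrono_literals;
    // Wait until both producers have posted, printing a dot every 200 ms.
    for (int received = 0; received < 2; ) {
        if (dataReady.try_acquire_for(200ms))
            ++received;
        else
            std::cout << "." << std::flush;
    }
}

void thread2() { /* prepare data1 */ dataReady.release(); }
void thread3() { /* prepare data2 */ dataReady.release(); }

int main()
{
    std::jthread a(thread1), b(thread2), c(thread3);
}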