Using a single Condition Variable to pause multiple threads - c++

I have a program that starts N threads (std::async/std::future). I want the main thread to set up some data, then all the threads should run while the main thread waits for them to finish, and then this needs to loop.
What I have at the moment is something like this:
int main()
{
    //Start N new threads (std::future/std::async)
    while(condition)
    {
        //Set Up Data Here
        //Send Data to threads
        {
            std::lock_guard<std::mutex> lock(mrun);
            bRun = true;
        }
        run.notify_all();
        //Wait for threads
        {
            std::unique_lock<std::mutex> lock(mrun);
            run.wait(lock, [] { return bDone; });
        }
        //Reset bools
        bRun = false;
        bDone = false;
    }
    //Get results from futures once complete
}
int thread()
{
    while(otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        run.wait(lock, [] { return bRun; });
        bDone = true;
        //Do thread stuff here
        lock.unlock();
        run.notify_all();
    }
}
But I can't see any signs of either the main or the other threads waiting for each other! Any idea what I am doing wrong or how I can do this?

There are a couple of problems. First, you're setting bDone as soon as the first worker wakes up. Thus the main thread wakes immediately and begins readying the next data set. You want to have the main thread wait until all workers have finished processing their data. Second, when a worker finishes processing, it loops around and immediately checks bRun. But it can't tell if bRun == true means that the next data set is ready or if the last data set is ready. You want to wait for the next data set.
Something like this should work:
std::mutex mrun;
std::condition_variable dataReady;
std::condition_variable workComplete;
int nCurrentIteration = 0;
int nWorkerCount = 0;

int main()
{
    //Start N new threads (std::future/std::async)
    while(condition)
    {
        //Set Up Data Here
        //Send Data to threads
        {
            std::lock_guard<std::mutex> lock(mrun);
            nWorkerCount = N;
            ++nCurrentIteration;
        }
        dataReady.notify_all();
        //Wait for threads
        {
            std::unique_lock<std::mutex> lock(mrun);
            workComplete.wait(lock, [] { return nWorkerCount == 0; });
        }
    }
    //Get results from futures once complete
}
int thread()
{
    int nNextIteration = 1;
    while(otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        dataReady.wait(lock, [&nNextIteration] { return nCurrentIteration == nNextIteration; });
        lock.unlock();
        ++nNextIteration;
        //Do thread stuff here
        lock.lock();
        if (--nWorkerCount == 0)
        {
            lock.unlock();
            workComplete.notify_one();
        }
    }
}
Be aware that this solution isn't quite complete. If a worker encounters an exception, then the main thread will hang (because the dead worker will never reduce nWorkerCount). You'll likely need a strategy to deal with that scenario.
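One possible strategy, sketched here rather than drawn from the answer itself: wrap the per-iteration work in a try/catch so the bookkeeping always runs and the main thread cannot be left hanging; the error can then be surfaced some other way (for example through the worker's std::future).
int thread()
{
    int nNextIteration = 1;
    while(otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        dataReady.wait(lock, [&nNextIteration] { return nCurrentIteration == nNextIteration; });
        lock.unlock();
        ++nNextIteration;
        try
        {
            //Do thread stuff here
        }
        catch (...)
        {
            // remember or rethrow the error later (e.g. through the future);
            // fall through so nWorkerCount is still decremented below
        }
        lock.lock();
        if (--nWorkerCount == 0)
        {
            lock.unlock();
            workComplete.notify_one();
        }
    }
}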
Incidentally, this pattern is called a barrier.
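For completeness, if C++20 is available, the standard library provides this pattern directly as std::barrier. A minimal sketch of the same main/worker cycle (the names and iteration count are illustrative, not taken from the question):
#include <barrier>
#include <thread>
#include <vector>

int main()
{
    constexpr int N = 4;
    constexpr int steps = 10;
    std::barrier sync(N + 1);            // N workers + the main thread

    std::vector<std::jthread> workers;
    for (int i = 0; i < N; ++i)
        workers.emplace_back([&] {
            for (int step = 0; step < steps; ++step) {
                sync.arrive_and_wait();  // wait until main has published the data
                // ... process this iteration's data ...
                sync.arrive_and_wait();  // signal that this worker is done
            }
        });

    for (int step = 0; step < steps; ++step) {
        // ... set up data here ...
        sync.arrive_and_wait();          // release the workers
        sync.arrive_and_wait();          // wait until every worker has finished
    }
}                                        // std::jthread joins automatically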

Related

Handle mutex lock in callback c++

I've got a Timer class that can run with both an initial time and an interval. There's an internal function, internalQuit, that performs thread.join() before the thread is started again in resetCallback. The thing is that each public function has its own std::lock_guard on the mutex to protect the data being written. I'm now running into an issue where, when the callback is used to (for example) stop the timer, the mutex cannot be locked by stop(). I'm hoping to get some help on how to tackle this issue.
class Timer
{
public:
    Timer(string_view identifier, Function &&timeoutHandler, Duration initTime, Duration intervalTime);
    void start();
    void stop() // for example
    {
        std::lock_guard lock{mutex};
        running = false;
        sleepCv.notify_all();
    }
    void setInitTime();
    void setIntervalTime();
    void resetCallback(Function &&timeoutHandler)
    {
        internalQuit();
        {
            std::lock_guard lock{mutex};
            quit = false;
        }
        startTimerThread(std::forward<Function>(timeoutHandler));
    }
private:
    void internalQuit() // performs thread join
    {
        {
            std::lock_guard lock{mutex};
            quit = true;
            running = false;
            sleepCv.notify_all();
        }
        thread.join();
    }
    void mainLoop(Function &&timeoutHandler)
    {
        while(!quit)
        {
            std::unique_lock lock{mutex};
            // wait for running with sleepCv.wait()
            // handle initTimer with sleepCv.wait_until()
            timeoutHandler(); // callback
            // handle intervalTimer with sleepCv.wait_until()
            timeoutHandler(); // callback
        }
    }
    void startTimerThread(Function &&timeoutHandler)
    {
        thread = std::thread([&, timeoutHandler = std::forward<Function>(timeoutHandler)](){
            mainLoop(timeoutHandler);
        });
    }
    std::thread thread{};
    std::mutex mutex{};
    std::condition_variable sleepCv{};
    // initTime, intervalTime and some booleans for updating with sleepCv.notify_all();
};
For testing this, I have the following test case in GTest. I'm expecting the timer to stop in the callback. Unfortunately, the timer hangs on acquiring the mutex lock in the stop() function.
std::atomic<int> callbackCounter;

void timerCallback()
{
    callbackCounter.fetch_add(1, std::memory_order_acq_rel);
}

TEST(timerTest, timerShouldStopWhenStoppedInNewCallback)
{
    std::atomic<int> testCounter{0};
    Timer<std::chrono::steady_clock> t{"timerstop", &timerCallback, std::chrono::milliseconds(0), std::chrono::milliseconds(100)};
    t.resetCallback([&]{
        testCounter += 1;
        t.stop();
    });
    t.start();
    sleepMilliSeconds(100);
    ASSERT_EQ(testCounter.load(), 1); // trigger due to original interval timeout
    sleepMilliSeconds(100);
    ASSERT_EQ(testCounter.load(), 1); // no trigger, because stopped in new callback
}
Removing all the mutexes from the public functions fixes the issue, but that could lead to race conditions when data is written to the member variables. Hence each function takes a lock before writing to, for example, the booleans.
I've tried looking into std::move to move the thread into a different variable during resetCallback and then calling join on that one. I'm also looking into std::recursive_mutex, but I have no experience with using it.
void resetCallback(Function &&timeoutHandler)
{
    internalQuit();
    {
        std::lock_guard lock{mutex};
        quit = false;
    }
    auto prevThread = std::thread(std::move(this->thread));
    // didn't know how to continue from here, requires more self-study.
    startTimerThread(std::forward<Function>(timeoutHandler));
}
It's a new subject for me, have worked with mutexes and timers before but with relatively simple stuff.
Thank you in advance.
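For reference, a minimal sketch (reusing the member names from the question, not an implementation from the post) of one common way to avoid this kind of self-deadlock: have mainLoop release the mutex while it invokes the callback, so the callback is free to call stop(), which then locks the mutex itself.
void mainLoop(Function &&timeoutHandler)
{
    std::unique_lock lock{mutex};
    while (!quit)
    {
        // wait for running / initTime / intervalTime with sleepCv here ...
        lock.unlock();
        timeoutHandler(); // invoked without holding the mutex, so it may call stop()
        lock.lock();
    }
}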

Is it possible to run a thread, that executes a function in a loop, only when a condition is met, which is also checked in a loop?

I want to check in one thread A if a condition is met. If the condition is true, I want another thread B to execute my code; once that is done, thread B should wait until the condition is true again, then execute the code again, and so on. There is enough time to execute all the code in thread B before the condition becomes false. Basically, thread A runs at normal speed; thread B only runs when thread A tells it it can run. And I don't want to spawn a new thread B every time; it shouldn't stop, it should just execute its code and then wait until it's allowed to execute its code again.
How can I do that? Below is what I have so far, but I don't know how to run mainExecution() in this type of loop.
std::mutex m;
std::condition_variable cv_can_execute;
bool b_can_execute = false;

void mainExecution() {
    std::unique_lock lk(m);
    cv_can_execute.wait(lk, [] { return b_can_execute; });
    doSomethingElse();
}

void canExecute() {
    std::unique_lock lk(m);
    while (true) {
        condition = canRun();
        if (condition) {
            b_can_execute = true;
            cv_can_execute.notify_all();
        }
        else {
            b_can_execute = false;
        }
    }
    b_add_done = true;
    cv_add_done.notify_all();
}

int main() {
    std::thread canExec(canExecute);
    std::thread mainExec(mainExecution);
    canExec.join();
    mainExec.join();
}
In your code both threads immediately lock mutex m, so only one can run at a time.
That's why you don't see the behavior you expect.
You should only lock the mutex when you want to touch shared memory, in your case b_can_execute. The code should look something like this:
void mainExecution() {
    {
        std::unique_lock lk(m);
        cv_can_execute.wait(lk, [] { return b_can_execute; });
    } // Here the lock is released so A can do work.
    doSomethingElse();
}

void canExecute() {
    // std::unique_lock lk(m); Remove this
    while (true) {
        condition = canRun();
        if (condition) {
            {
                std::unique_lock lk(m); // Lock to change the shared variable.
                b_can_execute = true;
            } // Unlock here, so B can run.
            // It's best to unlock before you notify, so that B doesn't wake just to block again.
            cv_can_execute.notify_all();
        }
        else {
            std::unique_lock lk(m);
            b_can_execute = false;
        }
    }
    {
        std::unique_lock lk(m);
        b_add_done = true;
    }
    cv_add_done.notify_all();
}
Now, in your case you only lock the mutex to synchronize on a bool. This is usually seen as overkill, as the cost of locking and unlocking is relatively high. You could look at atomic variables, which would replace your bool and allow the threads to synchronize without the mutex.
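For example, with C++20's std::atomic wait/notify support, a sketch of that alternative (reusing the names from the question) could look like this; wait(false) blocks only while the flag is still false:
#include <atomic>

std::atomic<bool> b_can_execute{false};

void mainExecution() {
    b_can_execute.wait(false);   // block while the flag is still false (C++20)
    doSomethingElse();
}

void canExecute() {
    // ... when canRun() reports that the condition is met:
    b_can_execute.store(true);
    b_can_execute.notify_all();  // wake any thread blocked in wait(false)
}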

C++/Qt: How to create a busyloop which you can put on pause?

Is there a better answer to this question than creating a spinlock-like structure with a global boolean flag which is checked in the loop?
bool isRunning = true;

void busyLoop()
{
    for (;;) {
        if (!isRunning)
            continue;
        // ...
    }
}

int main()
{
    // ...
    QPushButton *startBusyLoopBtn = new QPushButton("start busy loop");
    QObject::connect(startBusyLoopBtn, &QPushButton::clicked, [](){ busyLoop(); });
    QPushButton *startPauseBtn = new QPushButton("start/pause");
    QObject::connect(startPauseBtn, &QPushButton::clicked, [](){ isRunning = !isRunning; });
    // ...
}
To begin with, we waste CPU time while checking the flag. Secondly, we need two separate buttons for this scheme to work. How can we use Qt's signal-slot mechanism for a simpler solution?
You can use std::condition_variable:
std::mutex mtx;
std::condition_variable cv_start_stop;

std::thread thr([&](){
    /**
     * this thread will notify and unpause the main loop 3 seconds later
     */
    std::this_thread::sleep_for(std::chrono::milliseconds(3000));
    cv_start_stop.notify_all();
});

bool paused = true;
while (true)
{
    if (paused)
    {
        std::unique_lock<std::mutex> lock(mtx);
        cv_start_stop.wait(lock); // this blocks the thread until notified.
        std::cout << "thread unpaused\n";
        paused = false;
    }
    std::cout << "loop goes on until paused\n";
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
}
Instead of repeatedly checking a flag, this puts the thread to sleep until it is notified.
To pause, simply set paused = true; to unpause, call cv_start_stop.notify_one() or cv_start_stop.notify_all().
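To tie this back to the single start/pause button from the question, a rough sketch (the worker loop is assumed to run on its own std::thread; the variable names are illustrative): the slot toggles a shared flag under the mutex and notifies, and the worker waits with a predicate so spurious wakeups are harmless.
std::mutex mtx;
std::condition_variable cv_start_stop;
bool running = false;

// one button both starts and pauses the loop
QObject::connect(startPauseBtn, &QPushButton::clicked, [](){
    {
        std::lock_guard<std::mutex> lock(mtx);
        running = !running;
    }
    cv_start_stop.notify_all();
});

// worker thread
while (true)
{
    {
        std::unique_lock<std::mutex> lock(mtx);
        cv_start_stop.wait(lock, []{ return running; }); // sleeps while paused
    }
    // ... one iteration of work ...
}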

Waking up a thread waiting on a condition in infinite loop

I have a pretty basic producer / consumer implementation. The producer is the "main" thread, and the consumer is executed on a separate thread. However the consumer needs to be explicitly started, using a Start() function. This sets the "processing" flag to true (used in the infinite while loop).
Once in the while loop, the consumer then uses a condition variable to see if there is data in the queue to process. If yes, it does its work, goes back to the top of the infinite loop, then the condition variable, and so on.
The problem I am having is that the consumer is waiting for data in the queue when I want to stop processing. How can I wake up the consumer? I have provided some example code below, with some major components removed, just showing the high-level design (not everything is actually public).
// Consumer object
class Consumer {
public:
    std::mutex mtx_;
    bool processing_ = false;
    std::thread processing_thread_;
    std::queue<int> data_;
    std::condition_variable cv_;

    ~Consumer() {
        // Make sure the processing thread is stopped
        {
            std::lock_guard<std::mutex> lock(mtx_);
            processing_ = false;
        }
        if (processing_thread_.joinable()) {
            processing_thread_.join();
        }
    }

    void Start() {
        std::lock_guard<std::mutex> lock(mtx_);
        processing_ = true;
        processing_thread_ = std::thread(&Consumer::Run, this);
    }

    void Stop() {
        std::lock_guard<std::mutex> lock(mtx_);
        processing_ = false;
    }

    void AddData(int d) {
        std::lock_guard<std::mutex> lock(mtx_);
        data_.push(d);
        cv_.notify_one();
    }

    bool IsDataAvailable() const {
        return (!data_.empty());
    }

    void Run() {
        // The infinite loop
        while (processing_) {
            // This is where I get stuck waiting, even though processing_ has been
            // changed to false by the main thread
            std::unique_lock<std::mutex> lock(mtx_);
            cv_.wait(lock, std::bind(&Consumer::IsDataAvailable, this));
            // do some processing
        }
    }
}; // end of consumer
// Somewhere in main, trying to stop the processing thread because I am
// done processing, OR my_consumer goes out of scope and tries to join
// ...
    my_consumer.Stop();
}
// my_consumer goes out of scope here calling destructor.
A couple of changes are required for the consumer to also wait for a change in processing_:
~Consumer() {
    if (processing_thread_.joinable()) {
        Stop();
        processing_thread_.join();
    }
}
// ...
void Stop() {
    std::lock_guard<std::mutex> lock(mtx_);
    processing_ = false;
    cv_.notify_one();
}
// ...
void Run() {
    for (;;) {
        std::unique_lock<std::mutex> lock(mtx_);
        // Wait till something is put into the queue or stop is requested.
        cv_.wait(lock, [this]() { return !processing_ || !data_.empty(); });
        if (!data_.empty()) {
            // Process queue elements.
        } else if (!processing_) {
            return; // Only exit when the queue is empty.
        }
    }
}
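For completeness, a small usage sketch (hypothetical driver code, not from the question or the answer) of how the fixed class behaves: queued items are drained first, and Stop() wakes the consumer even when the queue is empty.
int main()
{
    Consumer my_consumer;
    my_consumer.Start();
    for (int i = 0; i < 5; ++i)
        my_consumer.AddData(i);
    my_consumer.Stop();   // sets processing_ = false and notifies the waiting consumer
}                         // the destructor joins the processing thread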

Correct way to wait a condition variable that is notified by several threads

I'm trying to do this with the C++11 concurrency support.
I have a sort of thread pool of worker threads that all do the same thing, where a master thread has an array of condition variables (one for each thread; they need to 'start' synchronized, i.e. not run ahead by one cycle of their loop).
for (auto &worker_cond : cond_arr) {
    worker_cond.notify_one();
}
Then this thread has to wait for a notification from each thread of the pool before restarting its cycle. What's the correct way of doing this? Have a single condition variable and wait on some integer that each non-master thread increments? Something like this (still in the master thread):
unique_lock<std::mutex> lock(workers_mtx);
workers_finished.wait(lock, [&workers] { return workers == cond_arr.size(); });
I see two options here:
Option 1: join()
Basically instead of using a condition variable to start the calculations in your threads, you spawn a new thread for every iteration and use join() to wait for it to be finished. Then you spawn new threads for the next iteration and so on.
Option 2: locks
You don't want the main-thread to notify as long as one of the threads is still working. So each thread gets its own lock, which it locks before doing the calculations and unlocks afterwards. Your main-thread locks all of them before calling the notify() and unlocks them afterwards.
I see nothing fundamentally wrong with your solution.
Guard workers with workers_mtx and you're done.
We could abstract this with a counting semaphore.
struct counting_semaphore {
    std::unique_ptr<std::mutex> m = std::make_unique<std::mutex>();
    std::ptrdiff_t count = 0;
    std::unique_ptr<std::condition_variable> cv = std::make_unique<std::condition_variable>();
    counting_semaphore( std::ptrdiff_t c = 0 ):count(c) {}
    counting_semaphore(counting_semaphore&&) = default;
    void take(std::size_t n = 1) {
        std::unique_lock<std::mutex> lock(*m);
        cv->wait(lock, [&]{ if (count - std::ptrdiff_t(n) < 0) return false; count -= n; return true; });
    }
    void give(std::size_t n = 1) {
        {
            std::unique_lock<std::mutex> lock(*m);
            count += n;
            if (count <= 0) return;
        }
        cv->notify_all();
    }
};
take takes count away, and blocks if there is not enough.
give adds to count, and notifies if there is a positive amount.
Now the worker threads ferry tokens between two semaphores.
std::vector< counting_semaphore > m_worker_start{count};
counting_semaphore m_worker_done{0}; // not count, zero
std::atomic<bool> m_shutdown = false;

// master controller:
for (each step) {
    for (auto&& starts : m_worker_start)
        starts.give();
    m_worker_done.take(count);
}

// master shutdown:
m_shutdown = true;
// wake up forever:
for (auto&& starts : m_worker_start)
    starts.give(std::size_t(-1)/2);

// worker thread:
while (true) {
    master->m_worker_start[my_id].take();
    if (master->m_shutdown) return;
    // do work
    master->m_worker_done.give();
}
or somesuch.
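For what it's worth, here is a minimal sketch of the single-condition-variable-plus-counter idea from the question, with the workers counter guarded by workers_mtx as suggested above (names beyond those in the question are illustrative; a generation counter keeps workers from running ahead by a cycle):
#include <condition_variable>
#include <mutex>

std::mutex workers_mtx;
std::condition_variable start_cycle;      // master -> workers
std::condition_variable workers_finished; // workers -> master
int generation = 0;          // incremented by the master for each new cycle
std::size_t workers = 0;     // how many workers have finished the current cycle

// master thread, once per cycle
void run_cycle(std::size_t worker_count)
{
    {
        std::lock_guard<std::mutex> lock(workers_mtx);
        workers = 0;
        ++generation;                     // publish the new cycle
    }
    start_cycle.notify_all();

    std::unique_lock<std::mutex> lock(workers_mtx);
    workers_finished.wait(lock, [&] { return workers == worker_count; });
}

// each worker thread
void worker_loop(std::size_t worker_count, int total_cycles)
{
    int my_generation = 1;
    for (int c = 0; c < total_cycles; ++c, ++my_generation) {
        std::unique_lock<std::mutex> lock(workers_mtx);
        start_cycle.wait(lock, [&] { return generation == my_generation; });
        lock.unlock();

        // ... do this cycle's work ...

        lock.lock();
        if (++workers == worker_count)
            workers_finished.notify_one();
    }
}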