condition_variable does not trigger when using an array of std::mutex - C++

This application is a recursive, multi-threaded one that uses detached threads. Each thread spawns a new bunch of threads before it dies.
Option 1 works, but the single mutex is a shared resource and therefore slows the application down. Option 2 should remove this bottleneck.
Option 1 (works):
std::condition_variable cv;
bool ready = false;
std::mutex mu;

// go triggers the thread's function
void go() {
    std::unique_lock<std::mutex> lck(mu);
    ready = true;
    cv.notify_all();
}

void ThreadFunc(...) {
    std::unique_lock<std::mutex> lck(mu);
    cv.wait(lck, []{ return ready; });
    // do something useful
}
Option 2 does NOT trigger the thread:
std::array<std::mutex, DUToutputs*MaxGnodes> arrMutex;

void go(long m, long Channel)
{
    std::unique_lock<std::mutex> lck(arrMutex[m + MaxGnodes*Channel]);
    ready = true;
    cv.notify_all();
}

void ThreadFunc(...) {
    std::unique_lock<std::mutex> lck(arrMutex[Inst + MaxGnodes*Channel]);
    while (!ready) cv.wait(lck);
    // do something useful
}
How can I make option #2 work?

The code in Option 2 contains a so-called data race on the variable ready, because the read and write operations on this variable are no longer synchronized. The behaviour of programs with data races is undefined. You can remove the data race by changing bool ready to std::atomic<bool> ready.
That should already fix the problem in Option 2. However, if you use std::atomic, you can also make other optimizations:
std::atomic<bool> ready{false};

void go(long m, long Channel) {
    // no lock required
    ready = true;
    cv.notify_all();
}

void ThreadFunc(...) {
    std::unique_lock<std::mutex> lck(arrMutex[Inst + MaxGnodes*Channel]);
    cv.wait(lck, [] { return ready.load(); });
    // do something useful
}

Related

Is it possible to run a thread that executes a function in a loop only when a condition is met, which is also checked in a loop?

I want thread A to check whether a condition is met; if the condition is true, I want another thread B to execute my code. Once that is done, thread B should wait until the condition is true again, then execute the code again, and so on. There is enough time to execute all the code in thread B before the condition becomes false. Basically, thread A runs at normal speed, and thread B only runs when thread A tells it it can run. I don't want to spawn a new thread B every time; thread B shouldn't stop, it should just execute its code and then wait until it's allowed to execute it again.
How can I do that? Below is what I have so far, but I don't know how to run mainExecution() in this kind of loop:
std::mutex m;
std::condition_variable cv_can_execute;
bool b_can_execute = false;

void mainExecution() {
    std::unique_lock lk(m);
    cv_can_execute.wait(lk, [] { return b_can_execute; });
    doSomethingElse();
}

void canExecute() {
    std::unique_lock lk(m);
    while (true) {
        bool condition = canRun();
        if (condition) {
            b_can_execute = true;
            cv_can_execute.notify_all();
        } else {
            b_can_execute = false;
        }
    }

    b_add_done = true;
    cv_add_done.notify_all();
}

int main() {
    std::thread canExec(canExecute);
    std::thread mainExec(mainExecution);
    canExec.join();
    mainExec.join();
}
In your code both threads immediately lock mutex m, so only one can run at a time.
That's why you don't see the behavior you expect.
You should only lock the mutex when you want to touch shared memory, in your case b_can_execute. The code should look something like this:
void mainExecution() {
    {
        std::unique_lock lk(m);
        cv_can_execute.wait(lk, [] { return b_can_execute; });
    } // Here the lock is released so A can do work.
    doSomethingElse();
}

void canExecute() {
    // std::unique_lock lk(m); Remove this
    while (true) {
        bool condition = canRun();
        if (condition) {
            {
                std::unique_lock lk(m); // Lock to change the shared variable.
                b_can_execute = true;
            } // Unlock here, so B can run.
            // It's best to unlock before you notify, so that B doesn't wake just to block again.
            cv_can_execute.notify_all();
        } else {
            std::unique_lock lk(m);
            b_can_execute = false;
        }
    }

    {
        std::unique_lock lk(m);
        b_add_done = true;
    }
    cv_add_done.notify_all();
}
Now, in your case you only lock the mutex to synchronize on a bool. This is usually seen as overkill, as the cost of locking and unlocking is relatively high. You could look at atomic variables, which would replace your bool and allow the threads to synchronize without the use of the mutex; a sketch of that approach follows below.
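To illustrate that last suggestion (this sketch is not part of the original answer): assuming C++20 is available, std::atomic provides wait()/notify_all(), so the flag itself can block and wake thread B without any mutex or condition_variable. The stubs canRun() and doSomethingElse() and the shutdown flag b_quit are placeholders added for the example.

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> b_can_execute{false};
std::atomic<bool> b_quit{false};

bool canRun()          { return true; }                      // stub for the asker's check
void doSomethingElse() { std::cout << "B did its work\n"; }  // stub for the asker's work

// Thread B: block until A raises the flag, do the work, then re-arm.
void mainExecution() {
    while (!b_quit.load()) {
        b_can_execute.wait(false);          // C++20: blocks while the value is still false
        if (b_quit.load()) break;
        doSomethingElse();
        b_can_execute.store(false);         // wait for the next "go" from A
    }
}

// Thread A: decides when B may run; no mutex involved.
void canExecute() {
    for (int i = 0; i < 3; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        if (canRun()) {
            b_can_execute.store(true);
            b_can_execute.notify_all();     // wake B
        }
    }
    b_quit.store(true);                     // shut down cleanly
    b_can_execute.store(true);              // release B if it is currently blocked
    b_can_execute.notify_all();
}

int main() {
    std::thread b(mainExecution);
    std::thread a(canExecute);
    a.join();
    b.join();
}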

condition_variable usage for signaling and waiting

If a data race is not an issue, can I use std::condition_variable for starting (i.e., signaling) and stopping (i.e., waiting) a thread's work?
For example:
std::atomic<bool> quit = false;
std::atomic<bool> work = false;

std::mutex mtx;
std::condition_variable cv;

// if work, then do computation, otherwise wait on work (or quit) to become true
// thread reads: work, quit
void thread1()
{
    while ( !quit )
    {
        // limiting the scope of the mutex
        {
            std::unique_lock<std::mutex> lck(mtx);
            // what I want here is to wait on this lambda
            cv.wait(lck, []{ return work || quit; });
        }
        if ( work )
        {
            // work can become false again while working.
            // What I want here is to complete the work,
            // then wait on the next iteration.
            ComputeWork();
        }
    }
}

// work controller
// thread writes: work, quit
void thread2()
{
    if ( keyPress == '1' )
    {
        // is it OK not to use a mutex here?
        work = false;
    }
    else if ( keyPress == '2' )
    {
        // ... or here?
        work = true;
        cv.notify_all();
    }
    else if ( keyPress == ESC )
    {
        // ... or here?
        quit = true;
        cv.notify_all();
    }
}
Update/Summary: this is not safe, because of the "lost wakeup" scenario that Adam describes.
cv.wait(lck, predicate); can be written equivalently as while (!predicate()) { cv.wait(lck); }.
To see the problem more easily: while (!predicate()) { /* a lost wakeup can occur here */ cv.wait(lck); }
It can be fixed by putting all reads/writes of the predicate variables inside the mutex scope:
void thread2()
{
    if ( keyPress == '1' )
    {
        std::unique_lock<std::mutex> lck(mtx);
        work = false;
    }
    else if ( keyPress == '2' )
    {
        std::unique_lock<std::mutex> lck(mtx);
        work = true;
        cv.notify_all();
    }
    else if ( keyPress == ESC )
    {
        std::unique_lock<std::mutex> lck(mtx);
        quit = true;
        cv.notify_all();
    }
}
No, not safe. The waiting thread can acquire the mutex, check the predicate, and see nothing to wake up for. Then the signalling thread sets the bool and signals. Next, the waiting thread blocks on the cv and never awakens.
You must hold the mutex at some point between triggering the wakeup lambda condition and notifying the cv to avoid this.
The "down" case (turning off the wakeup) I have not looked at, and it may depend on exactly what behaviour is OK. Without that specified in a formal sense I wouldn't do it either; in general, you should at least attempt sketches of formal proofs of correctness when fiddling with multi-threaded code, or your code will at best be accidentally working.
If you can't do that, find someone who can write that code for you.

Shared lock with two exclusive lock groups

I have two methods "log" and "measure" that should never execute at the same time.
So I tried to use a "std::mutex" to do this as follows:
void log(std::string message)
{
    mtx.lock();
    someLogFunctionality();
    mtx.unlock();
}

void measure()
{
    mtx.lock();
    someMeasureFunctionality();
    mtx.unlock();
}
Now it has turned out that it should also be possible to call "log" multiple times in parallel without locking, and the same applies to "measure". (Reason: someLogFunctionality() and someMeasureFunctionality() interfere with each other, but the same method may be called multiple times in parallel.)
I had a look at "std::shared_mutex" then, but there are two problems for me:
1.) With shared_mutex I could use lock_shared for only one of the methods (log or measure) but then the other one would have to use the exclusive lock (and could again not be executed multiple times in parallel)
void log(std::string message)
{
    mtx.lock_shared();
    someLogFunctionality();
    mtx.unlock_shared();
}

void measure()
{
    mtx.lock(); // This should also be shared, but among another "group"
    someMeasureFunctionality();
    mtx.unlock();
}
2.) I can't use C++17 (constraint in the environment that I'm working with)
Do you have any suggestions for how I could realize this?
Based on the reply from alexb I have written the following mutex class, which currently works for me (so far only tried out in a simple multithreaded example application).
Please note that it is not protected against "starvation". In simple words: it is not guaranteed that lockMeasure will ever get the lock if lockLogging is called at high frequency (and the other way round).
class MyMutex
{
private:
    std::atomic<int> log_executors;
    std::atomic<int> measure_executors;
    std::mutex mtx;
    std::condition_variable condition;

public:
    MyMutex() : log_executors(0), measure_executors(0) {}
    ~MyMutex() {}

    void lockMeasure()
    {
        std::unique_lock<std::mutex> lock(mtx);
        while (log_executors) {
            condition.wait(lock);
        }
        measure_executors++;
    }

    void unlockMeasure()
    {
        std::unique_lock<std::mutex> lock(mtx);
        measure_executors--;
        if (!measure_executors)
        {
            condition.notify_all();
        }
    }

    void lockLogging()
    {
        std::unique_lock<std::mutex> lock(mtx);
        while (measure_executors) {
            condition.wait(lock);
        }
        log_executors++;
    }

    void unlockLogging()
    {
        std::unique_lock<std::mutex> lock(mtx);
        log_executors--;
        if (!log_executors)
        {
            condition.notify_all();
        }
    }

    static MyMutex& getInstance()
    {
        static MyMutex _instance;
        return _instance;
    }
};
Usage:
void measure()
{
    MyMutex::getInstance().lockMeasure();
    someMeasureFunctionality();
    MyMutex::getInstance().unlockMeasure();
}

void log()
{
    MyMutex::getInstance().lockLogging();
    someLogFunctionality();
    MyMutex::getInstance().unlockLogging();
}
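One possible refinement (not part of the original post): since every lockMeasure()/unlockMeasure() and lockLogging()/unlockLogging() pair must match even when the protected code throws, a thin RAII guard over the MyMutex class above keeps the usage exception-safe. The guard class names here are made up for this sketch.

// Hypothetical RAII helpers around the MyMutex class above.
class MeasureGuard {
public:
    MeasureGuard()  { MyMutex::getInstance().lockMeasure(); }
    ~MeasureGuard() { MyMutex::getInstance().unlockMeasure(); }
    MeasureGuard(const MeasureGuard&) = delete;
    MeasureGuard& operator=(const MeasureGuard&) = delete;
};

class LogGuard {
public:
    LogGuard()  { MyMutex::getInstance().lockLogging(); }
    ~LogGuard() { MyMutex::getInstance().unlockLogging(); }
    LogGuard(const LogGuard&) = delete;
    LogGuard& operator=(const LogGuard&) = delete;
};

void measure()
{
    MeasureGuard guard;          // unlocks automatically, even if this throws
    someMeasureFunctionality();
}

void log()
{
    LogGuard guard;              // unlocks automatically, even if this throws
    someLogFunctionality();
}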
You need some barrier logic that is more complicated than shared_mutex (by the way, shared_mutex is not the best choice for multi-platform compilation). For example, you can use a mutex, a condition variable, and two counters for the barrier synchronization. It does not burn CPU, and you don't need sleeps to poll the condition.
#include <mutex>
#include <condition_variable>
#include <atomic>
#include <string>

std::atomic<int> log_executors{0};
std::atomic<int> measure_executors{0};

std::mutex mutex;
std::condition_variable condition;

void log(std::string message) {
    {
        std::unique_lock<std::mutex> lock(mutex);
        log_executors++; // Register the current executor and prevent new measure executors from entering
        // Wait until all measure executors have gone away
        while (measure_executors) {
            condition.wait(lock); // wait for the condition variable signal; the mutex is unlocked during the wait
        }
    }
    // here the lock is freed
    someLogFunctionality(); // execute the logic
    {
        std::unique_lock<std::mutex> lock(mutex);
        log_executors--; // unregister the current execution
        condition.notify_all(); // send the signal and unblock all waiters
    }
}

void measure()
{
    {
        std::unique_lock<std::mutex> lock(mutex);
        measure_executors++; // Register the current executor and prevent new log executors from entering
        // Wait until all log executors have gone away
        while (log_executors) {
            condition.wait(lock);
        }
    }
    someMeasureFunctionality();
    {
        std::unique_lock<std::mutex> lock(mutex);
        measure_executors--; // unregister the current execution
        condition.notify_all(); // send the signal and unblock all waiters
    }
}
You can have a master lock granting access to a semaphore variable:
// Assumed declarations, not shown in the original answer:
#include <mutex>
#include <string>
#include <unistd.h>   // for usleep (POSIX)

enum { NONE, LOG, MEASURE };
std::mutex mtx;
int owner = NONE;
int users[3] = {0, 0, 0};

void acquire(int what);
void release(int what);

void log(std::string message)
{
    acquire(LOG);
    someLogFunctionality();
    release(LOG);
}

void measure()
{
    acquire(MEASURE);
    someMeasureFunctionality();
    release(MEASURE);
}

void acquire(int what) {
    for (;;) {
        mtx.lock();
        if (owner == NONE) {
            owner = what;
        }
        if (owner == what) {
            // e.g. a LOG was asked for while LOG is already running
            users[owner]++;
            mtx.unlock();
            return;
        }
        mtx.unlock();
        // Some sleep would be good here
        usleep(5000);
    }
}

void release(int what) {
    mtx.lock();
    if (owner != what) {
        // This is an error. How could this happen?
    }
    if (users[what] <= 0) {
        // This is an error. How could this happen?
    }
    users[what]--;
    if (0 == users[what]) {
        owner = NONE;
    }
    mtx.unlock();
}
In this case, for example:
owner is NONE
LOG1 calls acquire(LOG). It succeeds because owner is NONE.
MEASURE1 calls acquire(MEASURE). It starts spinning in place because owner != MEASURE.
MEASURE2 calls acquire(MEASURE). It starts spinning in place because owner != MEASURE.
LOG2 calls acquire(LOG). It succeeds because owner is LOG; users[LOG] = 2.
LOG2 releases LOG. users[LOG] = 1.
LOG1 releases LOG. users[LOG] = 0, so owner becomes NONE.
MEASURE2, by pure chance, acquires mtx before MEASURE1, finds owner = NONE and proceeds.
MEASURE1 finds owner = MEASURE and sets users[MEASURE] = 2.
In the above, note that the second call to measure() actually started executing a bit earlier than the first. This should be OK. But if you want to keep the calls "serialized" even if they happen in parallel, you'll need a stack for each owner and more complex code.

How to notify a condition variable in another class, C++

I have a group of objects, and each object has two threads: the Task thread processes the data and notifies the Decision thread that the data is ready, then waits for the Decision thread to decide whether to continue operations; the Decision thread waits for the Task thread's data, then consumes the data and makes a decision (and notifies the Task thread that the decision is ready to fetch).
Task.cpp:
class Task {
public:
    void DoTask() {
        // process data
        {
            std::unique_lock<std::mutex> lck(mtx);
            data_ready = true;
            cv_data.notify_one();
            while (decision_ready == false)
                cv_decision.wait(lck);
        }
        if (decision) {
            // continue task
        } else {
            // quit
        }
    }

    void SetDecision(bool flag) { decision = flag; }
    bool GetDataFlag() const { return data_ready; }
    void SetDecisionFlag(bool flag) { decision_ready = flag; }

    std::mutex mtx;
    std::condition_variable cv_data;
    std::condition_variable cv_decision;

private:
    bool decision;
    bool data_ready;
    bool decision_ready;
};
main.cpp:
void Decision(Task *task);

int main() {
    Task mytask[10];
    std::thread doTask[10];
    std::thread decision[10];

    for (int i = 0; i < 10; ++i)
    {
        doTask[i] = std::thread(&Task::DoTask, &mytask[i]);
        decision[i] = std::thread(Decision, &mytask[i]);
        doTask[i].detach();
        decision[i].detach();
    }
}

void Decision(Task *task)
{
    std::mutex mtx_decision;
    std::unique_lock<std::mutex> lck(task->mtx);
    while (task->GetDataFlag() == false)
        task->cv_data.wait(lck);
    std::lock_guard<std::mutex> lk(mtx_decision);
    // check database and make decision
    task->SetDecision(true);
    task->SetDecisionFlag(true);
    task->cv_decision.notify_one();
}
What is the problem with this approach? The program works well only in the single-thread case. If I actually start two or more threads, I get a segmentation fault. I am not sure how to pass the condition variables between different scopes, and I hope someone can tell me the right way to do it. Thanks.
I suppose you need the same mutex and the same condition variable to get this working. Right now each Task object gets its own mutex and condition_variables, and each Decision gets its own as well.
The most likely reason why your application crashes is that you detach your threads and then your main() exits, killing the threads in the middle of what they are doing. I strongly advise against using detached threads.
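To illustrate that last point (this sketch is not from the original answer): keep the std::thread objects and join them instead of detaching, so that main() cannot return while the workers are still using the Task objects. It assumes the Task class and the Decision function from the question above.

#include <thread>
#include <vector>

int main() {
    Task mytask[10];
    std::vector<std::thread> workers;

    for (int i = 0; i < 10; ++i) {
        workers.emplace_back(&Task::DoTask, &mytask[i]);  // Task thread
        workers.emplace_back(Decision, &mytask[i]);       // Decision thread
    }

    // Joining (rather than detaching) keeps mytask alive until every thread has finished.
    for (auto& t : workers) {
        t.join();
    }
}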

Synchronizing three threads with Condition Variable

I have three threads in my application; the first thread needs to wait for data to be ready from the two other threads. The two other threads prepare the data concurrently.
In order to do that I am using a condition variable in C++ as follows:
boost::mutex mut;
boost::condition_variable cond;

Thread1:
bool check_data_received()
{
    return (data1_received && data2_received);
}

// Wait until socket data has arrived
boost::unique_lock<boost::mutex> lock(mut);
if (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
                     boost::bind(&check_data_received)))
{
}

Thread2:
{
    boost::lock_guard<boost::mutex> lock(mut);
    data1_received = true;
}
cond.notify_one();

Thread3:
{
    boost::lock_guard<boost::mutex> lock(mut);
    data2_received = true;
}
cond.notify_one();
So my question is: is it correct to do it this way, or is there a more efficient way? I am looking for the most optimized way to do the waiting.
It looks like you want a semaphore here, so you can wait for two "resources" to be "taken".
For now, just replace the mutual exclusion with an atomic. You can still use a cv to signal the waiter:
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <boost/bind.hpp>
#include <boost/chrono.hpp>
#include <cstdlib>
#include <iostream>

boost::mutex mut;
boost::condition_variable cond;

boost::atomic_bool data1_received(false);
boost::atomic_bool data2_received(false);

bool check_data_received()
{
    return (data1_received && data2_received);
}

void thread1()
{
    // Wait until socket data has arrived
    boost::unique_lock<boost::mutex> lock(mut);
    while (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
                            boost::bind(&check_data_received)))
    {
        std::cout << "." << std::flush;
    }
}

void thread2()
{
    boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
    data1_received = true;
    cond.notify_one();
}

void thread3()
{
    boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
    data2_received = true;
    cond.notify_one();
}

int main()
{
    boost::thread_group g;
    g.create_thread(thread1);
    g.create_thread(thread2);
    g.create_thread(thread3);
    g.join_all();
}
Notes:
Warning: it's essential that you know that only the waiter is waiting on the cv; otherwise you need notify_all() instead of notify_one().
It is not important that the waiter is already waiting before the workers signal their completion, because the predicated timed_wait checks the predicate before blocking.
Because this sample uses atomics and a predicated wait, it's not actually critical to signal the cv under the mutex. However, thread checkers will (rightly) complain about this (I think), because it's impossible for them to verify proper synchronization unless you add the locking.
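As a side note (not part of the original answer): the "semaphore" observation above maps naturally onto a countdown latch. If C++20 is available, std::latch expresses "wait for two resources" directly; this sketch drops the 200 ms polling/timeout aspect of the original code.

#include <latch>
#include <thread>

std::latch data_ready(2);      // two producers must count down before the consumer proceeds

void thread1() {
    data_ready.wait();         // blocks until both producers have arrived
    // consume data1 and data2 here
}

void thread2() {
    // ... prepare data1 ...
    data_ready.count_down();
}

void thread3() {
    // ... prepare data2 ...
    data_ready.count_down();
}

int main() {
    std::thread t1(thread1), t2(thread2), t3(thread3);
    t1.join(); t2.join(); t3.join();
}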