Shared lock with two exclusive lock groups - C++

I have two methods "log" and "measure" that should never execute at the same time.
So I tried to use a "std::mutex" to do this as follows:
void log(std::string message)
{
mtx.lock();
someLogFunctionality();
mtx.unlock();
}
void measure()
{
mtx.lock();
someMeasureFunctionality();
mtx.unlock();
}
Now it has turned out that it shall also be possible to call "log" multiple times in parallel without locking, and the same applies to "measure". (Reason: someLogFunctionality() and someMeasureFunctionality() interfere with each other, but the same method may be called multiple times in parallel.)
I had a look at "std::shared_mutex" then, but there are two problems for me:
1.) With shared_mutex I could use lock_shared for only one of the methods (log or measure) but then the other one would have to use the exclusive lock (and could again not be executed multiple times in parallel)
void log(std::string message)
{
mtx.lock_shared();
someLogFunctionality();
mtx.unlock_shared();
}
void measure()
{
mtx.lock(); // This should also be shared but among another "group"
someMeasureFunctionality();
mtx.unlock();
}
2.) I can't use C++17 (constraint in the environment that I'm working with)
Do you have any suggestions for me how I could realize this?

Based on the reply from alexb I have written the following mutex class, which currently works for me (only tried out in a simple multithreaded example application so far).
Please note that it is not protected against "starvation". In simple words: it is not ensured that lockMeasure will ever get the lock if lockLogging is called at high frequency (and the other way round). See the sketch after the usage example below for one possible mitigation.
class MyMutex
{
private:
std::atomic<int> log_executors;
std::atomic<int> measure_executors;
std::mutex mtx;
std::condition_variable condition;
public:
MyMutex() : log_executors(0), measure_executors(0) {}
~MyMutex() {}
void lockMeasure()
{
std::unique_lock<std::mutex> lock(mtx);
while(log_executors) {
condition.wait(lock);
}
measure_executors++;
}
void unlockMeasure()
{
std::unique_lock<std::mutex> lock(mtx);
measure_executors--;
if (!measure_executors)
{
condition.notify_all();
}
}
void lockLogging()
{
std::unique_lock<std::mutex> lock(mtx);
while(measure_executors) {
condition.wait(lock);
}
log_executors++;
}
void unlockLogging()
{
std::unique_lock<std::mutex> lock(mtx);
log_executors--;
if (!log_executors)
{
condition.notify_all();
}
}
static MyMutex& getInstance()
{
static MyMutex _instance;
return _instance;
}
};
Usage:
void measure()
{
MyMutex::getInstance().lockMeasure();
someMeasureFunctionality();
MyMutex::getInstance().unlockMeasure();
}
void log()
{
MyMutex::getInstance().lockLogging();
someLogFunctionality();
MyMutex::getInstance().unlockLogging();
}
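One direction for mitigating the starvation issue (only a sketch, not thoroughly tested; the waiting_log / waiting_measure counters are additions of mine) would be to let newly arriving threads of the currently running group back off as soon as the other group has waiters, so the running batch can drain:
// Additional members, initialized to 0 in the constructor:
//   int waiting_log;
//   int waiting_measure;
void lockMeasure()
{
    std::unique_lock<std::mutex> lock(mtx);
    waiting_measure++;
    // Also back off while loggers are waiting and measuring is already running,
    // so the running measure batch drains instead of growing indefinitely.
    while (log_executors || (waiting_log && measure_executors)) {
        condition.wait(lock);
    }
    waiting_measure--;
    measure_executors++;
}
void lockLogging()
{
    std::unique_lock<std::mutex> lock(mtx);
    waiting_log++;
    // Symmetric condition for the logging group.
    while (measure_executors || (waiting_measure && log_executors)) {
        condition.wait(lock);
    }
    waiting_log--;
    log_executors++;
}
// unlockMeasure() and unlockLogging() stay exactly as above.
This does not make the lock fully fair (a waiter can still lose the wake-up race), but it prevents one group from piling up new executors while the other group is already waiting.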

You need some barrier logic which is more complicated than shared_mutex (BTW, shared_mutex is not the best choice for multi-platform compilation). For example, you can use a mutex, a condition variable, and two counters for the barrier synchronization. This does not burn CPU, and you do not need sleeps for polling.
#include <mutex>
#include <condition_variable>
#include <atomic>
std::atomic<int> log_executors{0}; // brace-init: std::atomic cannot be copy-initialized before C++17
std::atomic<int> measure_executors{0};
std::mutex mutex;
std::condition_variable condition;
void log(std::string message) {
{
std::unique_lock<std::mutex> lock(mutex);
log_executors++; // Register current executor and prevent from entering new measure executors
// Wait until all measure executors have gone away
while(measure_executors) {
condition.wait(lock); // wait for the condition variable signal; the mutex is unlocked during the wait
}
}
// here lock is freed
someLogFunctionality(); // execute logic
{
std::unique_lock<std::mutex> lock(mutex);
log_executors--; // unregister current execution
condition.notify_all(); // send signal and unlock all waiters
}
}
void measure()
{
{
std::unique_lock<std::mutex> lock(mutex);
measure_executors++; // Register current executor and prevent from entering new log executors
while(log_executors) {
condition.wait(lock); // wait until all log executors have gone away
}
}
someMeasureFunctionality();
{
std::unique_lock<std::mutex> lock(mutex);
measure_executors--; // unregister current execution
condition.notify_all(); // send signal and unlock all waiters
}
}
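For what it's worth, a tiny driver along these lines (not part of the answer; it assumes someLogFunctionality() and someMeasureFunctionality() are defined somewhere above) can be used to watch several log() calls overlap with each other while they exclude measure() calls:
#include <string>
#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back([] { log("hello"); });
        threads.emplace_back([] { measure(); });
    }
    for (auto &t : threads) {
        t.join();
    }
}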

You can have a master lock granting access to a semaphore variable:
void log(std::string message)
{
acquire(LOG);
someLogFunctionality();
release(LOG);
}
void measure()
{
acquire(MEASURE);
someMeasureFunctionality();
release(MEASURE);
}
void acquire(int what) {
for (;;) {
mtx.lock();
if (owner == NONE) {
owner = what;
}
if (owner == what) {
// A LOG was asked while LOG is running
users[owner]++;
mtx.unlock();
return;
}
mtx.unlock();
// Some sleep would be good
usleep(5000);
}
}
void release(int what) {
mtx.lock();
if (owner != what) {
// This is an error. How could this happen?
}
if (users[what] <= 0) {
// This is an error. How could this happen?
}
users[what]--;
if (0 == users[what]) {
owner = NONE;
}
mtx.unlock();
}
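The snippet leaves out the supporting declarations; they might look roughly like this (the concrete types and values are an assumption on my side):
#include <mutex>
#include <unistd.h>

enum { NONE = -1, LOG = 0, MEASURE = 1 };

std::mutex mtx;
int owner = NONE;
int users[2] = {0, 0}; // users[LOG] and users[MEASURE]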
In this case, for example:
owner is NONE
LOG1 acquires LOG. It can do so because owner is NONE
MEASURE1 calls acquire(MEASURE). It starts spinning in place because owner != MEASURE
MEASURE2 calls acquire(MEASURE). It starts spinning in place because owner != MEASURE
LOG2 acquires LOG. It can do so because owner is LOG, users[LOG]=2
LOG2 releases LOG. users[LOG]=1
LOG1 releases LOG. users[LOG]=0, so owner becomes NONE
MEASURE2 by pure chance acquires mtx before MEASURE1, finds owner=NONE and proceeds
MEASURE1 finds owner=MEASURE and sets users[MEASURE]=2
In the above, note that the second call to measure() actually executed a bit earlier. This should be OK. But if you want to keep the calls "serialized" even if they happen in parallel, you'll need a stack for each owner and more complex code.

Related

Handle mutex lock in callback c++

I've got a Timer class that can run with both an initial time and an interval. An internal function internalQuit performs thread.join() before a thread is started again in resetCallback. The thing is that each public function has its own std::lock_guard on the mutex to prevent the data from being written concurrently. I'm now running into the issue that when using the callback to, for example, stop the timer, the mutex cannot be locked by stop(). I'm hoping to get some help on how to tackle this issue.
class Timer
{
public:
Timer(string_view identifier, Function &&timeoutHandler, Duration initTime, Duration intervalTime);
void start();
void stop() // for example
{
std::lock_guard lock{mutex};
running = false;
sleepCv.notify_all();
}
void setInitTime();
void setIntervalTime();
void resetCallback(Function &&timeoutHandler)
{
internalQuit();
{
std::lock_guard lock{mutex};
quit = false;
}
startTimerThread(std::forward<Function>(timeoutHandler));
}
private:
void internalQuit() // performs thread join
{
{
std::lock_guard lock {mutex};
quit = true;
running = false;
sleepCv.notify_all();
}
thread.join();
}
void mainLoop(Function &&timeoutHandler)
{
while(!quit)
{
std::unique_lock lock{mutex};
// wait for running with sleepCv.wait()
// handle initTimer with sleepCv.wait_until()
timeoutHandler(); // callback
// handle intervalTimer with sleepCv.wait_until()
timeoutHandler(); // callback
}
}
void startTimerThread(Function &&timeoutHandler)
{
thread = std::thread([&, timeoutHandler = std::forward<Function>(timeoutHandler)](){
mainLoop(timeoutHandler);
});
}
std::thread thread{};
std::mutex mutex{};
std::condition_variable sleepCv{};
// initTime, intervalTime and some booleans for updating with sleepCv.notify_all();
};
For testing this, I have the following test case in GTest. I'm expecting the timer to stop in the callback. Unfortunately, the timer hangs on acquiring the mutex lock in the stop() function.
std::atomic<int> callbackCounter;
void timerCallback()
{
callbackCounter.fetch_add(1, std::memory_order_acq_rel);
}
TEST(timerTest, timerShouldStopWhenStoppedInNewCallback)
{
std::atomic<int> testCounter{0};
Timer<std::chrono::steady_clock > t{"timerstop", &timerCallback, std::chrono::milliseconds(0), std::chrono::milliseconds(100)};
t.resetCallback([&]{
testCounter += 1;
t.stop();
});
t.start();
sleepMilliSeconds(100);
ASSERT_EQ(testCounter.load(), 1); // trigger due to original interval timeout
sleepMilliSeconds(100);
ASSERT_EQ(testCounter.load(), 1); // no trigger, because stopped in new callback
}
Removing all the mutexes in each of the public functions fixes the issue, but that could lead to possible race conditions for data being written to variables. Hence each function takes a lock before writing to, e.g., the booleans.
I've tried looking into the std::move functionality to move the thread during the resetCallback into a different variable and then call join on that one. I'm also investigating recursive_mutex but have no experience with using that.
void resetCallback(Function &&timeoutHandler)
{
internalQuit();
{
std::lock_guard lock{mutex};
quit = false;
}
auto prevThread = std::thread(std::move(this->thread));
// didn't know how to continue from here, requiring more selfstudy.
startTimerThread(std::forward<Function>(timeoutHandler));
}
It's a new subject for me; I have worked with mutexes and timers before, but only with relatively simple stuff.
Thank you in advance.
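One common way out of this particular deadlock (a sketch only, not taken from the question or an answer, and reusing the members from the outline above) is to release the lock around the callback invocation in mainLoop, so that the callback is free to take the mutex again in stop():
void mainLoop(Function &&timeoutHandler)
{
    while (!quit)
    {
        std::unique_lock lock{mutex};
        // wait for running / handle initTimer with sleepCv.wait_until() as before
        lock.unlock();     // drop the lock before calling out
        timeoutHandler();  // the callback may now lock the mutex itself, e.g. in stop()
        lock.lock();       // re-acquire before touching shared state again
        // handle intervalTimer, then unlock around the second callback in the same way
    }
}
A recursive_mutex would also let stop() lock from inside the callback, but std::condition_variable only works with std::mutex, so unlocking around the callback is usually the simpler route here.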

Waking up a thread waiting on a condition in infinite loop

I have a pretty basic producer / consumer implementation. The producer is the "main" thread, and the consumer is executed on a separate thread. However the consumer needs to be explicitly started, using a Start() function. This sets the "processing" flag to true (used in the infinite while loop).
Once in the while loop, the consumer then uses a condition variable to see if there is data in the queue to process. If yes, it does its work, goes back to the top of the infinite loop, then the condition variable, and so on.
The problem I am having is that the consumer is waiting for data in the queue when I want to stop processing. How can I wake up the consumer? I have provided some example code below, removing some major components, just showing the high-level design (not everything is actually public).
// Consumer object
class Consumer {
public:
std::mutex mtx_;
bool processing_ = false;
std::thread processing_thread_;
std::queue<int> data_;
std::condition_variable cv_;
~Consumer() {
// Make sure the processing thread is stopped
{
std::lock_guard<std::mutex> lock(mtx_);
processing_ = false;
}
if (processing_thread_.joinable()) {
processing_thread_.join();
}
}
void Start() {
std::lock_guard<std::mutex> lock(mtx_);
processing_ = true;
processing_thread_ = std::thread(
&Consumer::Run,
this);
}
void Stop() {
std::lock_guard<std::mutex> lock(mtx_);
processing_ = false;
}
void AddData(int d) {
std::lock_guard<std::mutex> lock(mtx_);
data_.push(d);
cv_.notify_one();
}
bool IsDataAvailable() const {
return !data_.empty();
}
void Run() {
// The infinite loop
while (processing_) {
// This is where I get stuck waiting even tho processing has been
// changed to false by the main thread
std::unique_lock<std::mutex> lock(mtx_);
cv_.wait(lock, std::bind(
&Consumer::IsDataAvailable, this));
// do some processing
}
}
}; // end of consumer
// Somewhere in main trying to stop the processing thread cause I am
// done processing OR my_consumer goes out of scope and tries to join
// ...
my_consumer.Stop();
}
// my_consumer goes out of scope here calling destructor.
A couple of changes are required for the consumer to also wait for a change in processing_:
~Consumer() {
if (processing_thread_.joinable()) {
Stop();
processing_thread_.join();
}
}
// ...
void Stop() {
std::lock_guard<std::mutex> lock(mtx_);
processing_ = false;
cv_.notify_one();
}
// ...
void Run() {
for(;;) {
std::unique_lock<std::mutex> lock(mtx_);
// Wait till something is put into the queue or stop requested.
cv_.wait(lock, [this]() { return !processing_ || !data_.empty(); });
if(!data_.empty()) {
// Process queue elements.
} else if(!processing_) {
return; // Only exit when the queue is empty.
}
}
}
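With those changes, a minimal usage sketch (same names as in the question) would be:
int main() {
    Consumer my_consumer;
    my_consumer.Start();
    for (int i = 0; i < 5; ++i) {
        my_consumer.AddData(i);
    }
    my_consumer.Stop(); // wakes the consumer even if the queue is already empty
}   // destructor joins the processing thread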

C++ Lock a mutex as if from another thread?

I'm writing an Audio class that holds an std::thread for refilling some buffers asynchronously. Say we call the main thread A and the background (class member) thread B. I'm using an std::mutex to block thread B whenever the sound is not playing, so that it doesn't run in the background when unnecessary and doesn't use excess CPU power. The mutex is locked by thread A by default, so thread B is blocked; when it's time to play the sound, thread A unlocks the mutex and thread B runs (by locking and then immediately unlocking it) in a loop.
The issue comes up when thread B sees that it's reached the end of the file. It can stop playback and clean up buffers and such, but it can't stop its own loop because thread B can't lock the mutex from thread A.
Here's the relevant code outline:
class Audio {
private:
// ...
std::thread Thread;
std::mutex PauseMutex; // mutex that blocks Thread, locked in constructor
void ThreadFunc(); // assigned to Thread in constructor
public:
// ...
void Play();
void Stop();
};
void Audio::ThreadFunc() {
// ... (include initial check of mutex here)
while (!this->EndThread) { // Thread-safe flag, only set when Audio is destructed
// ... Check and refill buffers as necessary, etc ...
if (EOF)
Stop();
// Attempt a lock, blocks thread if sound/music is not playing
this->PauseMutex.lock();
this->PauseMutex.unlock();
}
}
void Audio::Play() {
// ...
PauseMutex.unlock(); // unlock mutex so loop in ThreadFunc can start
}
void Audio::Stop() {
// ...
PauseMutex.lock(); // locks mutex to stop loop in ThreadFunc
// ^^ This is the issue here
}
In the above setup, when the background thread sees that it's reached EOF, it would call the class's Stop() function, which supposedly locks the mutex to stop the background thread. This doesn't work because the mutex would have to be locked by the main thread, not the background thread (in this example, it crashes in ThreadFunc because the background thread attempts a lock in its main loop after already locking in Stop()).
At this point the only thing I could think of would be to somehow have the background thread lock the mutex as if it was the main thread, giving the main thread ownership of the mutex... if that's even possible? Is there a way for a thread to transfer ownership of a mutex to another thread? Or is this a design flaw in the setup I've created? (If the latter, are there any rational workarounds?) Everything else in the class so far works just as designed.
I'm not going to even pretend to understand how your code is trying to do what it is doing. There is one thing, however, that is evident. You're trying to use a mutex for conveying some predicate state change, which is the wrong vehicle to drive on that freeway.
Predicate state change is handled by coupling three things:
Some predicate datum
A mutex to protect the predicate
A condition variable to convey possible change in predicate state.
The Goal
The goal in the below example is to demonstrate how a mutex, a condition variable, and predicate data are used in concert when controlling program flow across multiple threads. It shows examples of using both wait and wait_for condition variable functionality, as well as one way to run a member function as a thread proc.
Following is a simple Player class that toggles between four possible states:
Stopped : The player is not playing, nor paused, nor quitting.
Playing : The player is playing
Paused : The player is paused, and will continue from whence it left off once it resumes Playing.
Quit : The player should stop what it is doing and terminate.
The predicate data is fairly obvious: the state member. It must be protected, which means it cannot be changed nor checked except under the protection of the mutex. I've added to this a counter that simply increments while the Playing state is maintained for some period of time. More specifically:
While Playing, each 200ms the counter increments, then dumps some data to the console.
While Paused, counter is not changed, but retains its last value while Playing. This means when resumed it will continue from where it left off.
When Stopped, the counter is reset to zero and a newline is injected into the console output. This means switching back to Playing will start the counter sequence all over again.
Setting the Quit state has no effect on counter, it will be going away along with everything else.
The Code
#include <iostream>
#include <functional>
#include <chrono>
#include <mutex>
#include <condition_variable>
#include <thread>
#include <unistd.h>
using namespace std::chrono_literals;
struct Player
{
private:
std::mutex mtx;
std::condition_variable cv;
std::thread thr;
enum State
{
Stopped,
Paused,
Playing,
Quit
};
State state;
int counter;
void signal_state(State st)
{
std::unique_lock<std::mutex> lock(mtx);
if (st != state)
{
state = st;
cv.notify_one();
}
}
// main player monitor
void monitor()
{
std::unique_lock<std::mutex> lock(mtx);
bool bQuit = false;
while (!bQuit)
{
switch (state)
{
case Playing:
std::cout << ++counter << '.';
cv.wait_for(lock, 200ms, [this](){ return state != Playing; });
break;
case Stopped:
cv.wait(lock, [this]() { return state != Stopped; });
std::cout << '\n';
counter = 0;
break;
case Paused:
cv.wait(lock, [this]() { return state != Paused; });
break;
case Quit:
bQuit = true;
break;
}
}
}
public:
Player()
: state(Stopped)
, counter(0)
{
thr = std::thread(std::bind(&Player::monitor, this));
}
~Player()
{
quit();
thr.join();
}
void stop() { signal_state(Stopped); }
void play() { signal_state(Playing); }
void pause() { signal_state(Paused); }
void quit() { signal_state(Quit); }
};
int main()
{
Player player;
player.play();
sleep(3);
player.pause();
sleep(3);
player.play();
sleep(3);
player.stop();
sleep(3);
player.play();
sleep(3);
}
Output
I can't really demonstrate this. You'll have to run it and see how it works, and I invite you to toy with the states in main() as I have above. Do note, however, that once quit is invoked, none of the other states will be monitored. Setting the Quit state will shut down the monitor thread. For what it's worth, a run of the above should look something like this:
1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.16.17.18.19.20.21.22.23.24.25.26.27.28.29.30.
1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.
with the first set of numbers dumped in two groups (1..15, then 16..30), as a result of playing, then pausing, then playing again. Then a stop is issued, followed by another play for a period of ~3 seconds. After that, the object self-destructs, and in doing so, sets the Quit state, and waits for the monitor to terminate.
Summary
Hopefully you get something out of this. If you find yourself trying to manage predicate state by manually latching and releasing mutexes, chances are you need a condition-variable design pattern to facilitate detecting those changes.
Hope you get something out of it.
class CtLockCS
{
public:
//--------------------------------------------------------------------------
CtLockCS() { ::InitializeCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
~CtLockCS() { ::DeleteCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
bool TryLock() { return ::TryEnterCriticalSection(&m_cs) == TRUE; }
//--------------------------------------------------------------------------
void Lock() { ::EnterCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
void Unlock() { ::LeaveCriticalSection(&m_cs); }
//--------------------------------------------------------------------------
protected:
CRITICAL_SECTION m_cs;
};
///////////////////////////////////////////////////////////////////////////////
// class CtLockMX - using mutex
class CtLockMX
{
public:
//--------------------------------------------------------------------------
CtLockMX(const TCHAR* nameMutex = 0)
{ m_mx = ::CreateMutex(0, FALSE, nameMutex); }
//--------------------------------------------------------------------------
~CtLockMX()
{ if (m_mx) { ::CloseHandle(m_mx); m_mx = NULL; } }
//--------------------------------------------------------------------------
bool TryLock()
{ return m_mx ? (::WaitForSingleObject(m_mx, 0) == WAIT_OBJECT_0) : false; }
//--------------------------------------------------------------------------
void Lock()
{ if (m_mx) { ::WaitForSingleObject(m_mx, INFINITE); } }
//--------------------------------------------------------------------------
void Unlock()
{ if (m_mx) { ::ReleaseMutex(m_mx); } }
//--------------------------------------------------------------------------
protected:
HANDLE m_mx;
};
///////////////////////////////////////////////////////////////////////////////
// class CtLockSM - using semaphore
class CtLockSM
{
public:
//--------------------------------------------------------------------------
CtLockSM(int maxcnt) { m_sm = ::CreateSemaphore(0, maxcnt, maxcnt, 0); }
//--------------------------------------------------------------------------
~CtLockSM() { ::CloseHandle(m_sm); }
//--------------------------------------------------------------------------
bool TryLock() { return m_sm ? (::WaitForSingleObject(m_sm, 0) == WAIT_OBJECT_0) : false; }
//--------------------------------------------------------------------------
void Lock() { if (m_sm) { ::WaitForSingleObject(m_sm, INFINITE); } }
//--------------------------------------------------------------------------
void Unlock()
{
if (m_sm){
LONG prevcnt = 0;
::ReleaseSemaphore(m_sm, 1, &prevcnt);
}
}
//--------------------------------------------------------------------------
protected:
HANDLE m_sm;
};
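Since all three wrappers expose the same Lock/TryLock/Unlock interface, a small RAII guard (not part of the original classes, just a sketch) works with any of them:
template <class LockType>
class CtScopedLock
{
public:
    explicit CtScopedLock(LockType& lk) : m_lk(lk) { m_lk.Lock(); }
    ~CtScopedLock() { m_lk.Unlock(); }
    CtScopedLock(const CtScopedLock&) = delete;
    CtScopedLock& operator=(const CtScopedLock&) = delete;
private:
    LockType& m_lk;
};

// Usage:
// CtLockMX mx;
// {
//     CtScopedLock<CtLockMX> guard(mx);
//     // ... protected work ...
// }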

conditional_variable does not trigger when using array of std::mutex

This application is a recursive, multi-threaded, detached one. Each thread regenerates a new bunch of threads before it dies.
Option 1 works, but the single shared mutex is a bottleneck that slows the application down.
Option 2 should remove this bottleneck.
Option 1 works:
std::condition_variable cv;
bool ready = false;
std::mutex mu;
// go triggers the thread's function
void go() {
std::unique_lock<std::mutex> lck( mu );
ready = true;
cv.notify_all();
}
void ThreadFunc ( ...) {
std::unique_lock<std::mutex> lck ( mu );
cv.wait(lck, []{return ready;});
// do something useful
}
Option 2 does NOT trigger the thread:
std::array<std::mutex, DUToutputs*MaxGnodes> arrMutex ;
void go ( long m , long Channel )
{
std::unique_lock<std::mutex> lck( arrMutex[m+MaxGnodes*Channel] );
ready = true;
cv.notify_all();
}
void ThreadFunc ( ...) {
std::unique_lock<std::mutex> lck ( arrMutex[Inst+MaxGnodes*Channel] );
while (!ready) cv.wait(lck);
// do something useful
}
How can I make option #2 work?
The code in Option 2 contains a so-called data race on the variable ready, because the read and write operations on this variable are no longer synchronized. The behaviour of programs with data races is undefined. You can remove the data race by changing bool ready to std::atomic<bool> ready.
That should already fix the problem in Option 2. However, if you use std::atomic, you can also make other optimizations:
std::atomic<bool> ready{false};
void go(long m, long Channel) {
// no lock required
ready = true;
cv.notify_all();
}
void ThreadFunc( ...) {
std::unique_lock<std::mutex> lck(arrMutex[Inst+MaxGnodes*Channel]);
cv.wait(lck, [] { return ready.load(); }); // .load(): returning the atomic itself by value would not compile
// do something useful
}

Synchronizing three threads with Condition Variable

I have three threads in my application; the first thread needs to wait for data to be ready from the two other threads. The two other threads are preparing the data concurrently.
In order to do that I am using a condition variable in C++ as follows:
boost::mutex mut;
boost::condition_variable cond;
Thread1:
bool check_data_received()
{
return (data1_received && data2_received);
}
// Wait until socket data has arrived
boost::unique_lock<boost::mutex> lock(mut);
if (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
boost::bind(&check_data_received)))
{
}
Thread2:
{
boost::lock_guard<boost::mutex> lock(mut);
data1_received = true;
}
cond.notify_one();
Thread3:
{
boost::lock_guard<boost::mutex> lock(mut);
data2_received = true;
}
cond.notify_one();
So my question is: is it correct to do it that way, or is there a more efficient way? I am looking for the most optimized way to do the waiting.
It looks like you want a semaphore here, so you can wait for two "resources" to be "taken".
For now, just replace the mutual exclusion with an atomic. You can still use a cv to signal the waiter:
#include <boost/thread.hpp>
#include <iostream>
#include <cstdlib>
boost::mutex mut;
boost::condition_variable cond;
boost::atomic_bool data1_received(false);
boost::atomic_bool data2_received(false);
bool check_data_received()
{
return (data1_received && data2_received);
}
void thread1()
{
// Wait until socket data has arrived
boost::unique_lock<boost::mutex> lock(mut);
while (!cond.timed_wait(lock, boost::posix_time::milliseconds(200),
boost::bind(&check_data_received)))
{
std::cout << "." << std::flush;
}
}
void thread2()
{
boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
data1_received = true;
cond.notify_one();
}
void thread3()
{
boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
data2_received = true;
cond.notify_one();
}
int main()
{
boost::thread_group g;
g.create_thread(thread1);
g.create_thread(thread2);
g.create_thread(thread3);
g.join_all();
}
Note:
warning - it's essential that you know only the waiter is waiting on the cv, otherwise you need notify_all() instead of notify_one().
It is not important that the waiter is already waiting before the workers signal their completion, because the predicated timed_wait checks the predicate before blocking.
Because this sample uses atomics and predicated wait, it's not actually critical to signal the cv under the mutex. However, thread checkers will (rightly) complain about this (I think) because it's impossible for them to check proper synchronization unless you add the locking.
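If you want to keep such a checker quiet, the workers can publish their flag under the mutex and then notify; for example, a variant of thread2 (thread3 would be symmetric):
void thread2()
{
    boost::this_thread::sleep_for(boost::chrono::milliseconds(rand() % 4000));
    {
        boost::lock_guard<boost::mutex> lock(mut);
        data1_received = true; // write happens under the same mutex the waiter holds
    }
    cond.notify_one();
}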