How should I lock a wxMutex in one function and unlock it in another? - c++

I'm supposed to implement my own logging class for use in a program with two threads, the main one and a joinable processing thread. I don't want either thread to use the logger while the other is using it, so I figured I'd use a wxMutex. The logger has to act like a C++ ostream with operator<<, except that a stream manipulator function (std::endl, for example) will indicate the end of the logger's output. I think I need a recursive mutex so that the same thread can continue outputting when the mutex is locked.
The problem (I think) is that the mutex doesn't get fully unlocked, so when the other thread tries to output to the logger, it deadlocks.
Is there something in my code that I'm missing?
class Logger{
private:
    std::ostream* out;
    wxMutex* mut;
    ....
public:
    // accept any type of input and lock outputting until finished
    template<class T>
    Logger& operator<<(T str){
        if(out){
            if(mut->TryLock() == wxMUTEX_BUSY){ // if we didn't get the lock
                mut->Lock();                    // we block
            }
            if(out){
                (*out) << str;
                out->flush();
            }
            //mut->Unlock(); // leave locked until done outputting (use std::endl or similar)
        }
        return *this;
    }

    // accept stream manipulators and unlock stream output
    Logger& operator<<(std::ostream& (*pf) (std::ostream&)){
        if(out){
            if(mut->TryLock() == wxMUTEX_BUSY){
                mut->Lock();
            }
            (*out) << pf;
            out->flush();
            while(mut->Unlock() != wxMUTEX_UNLOCKED);
        }
        return *this;
    }
};

If you are worried about threading issues, you could instead make a macro that makes sure the mutex is acquired before the output and released after the output.
Something like:
#define LOG(logger, output) \
do { logger.lock(); logger << output; logger.unlock(); } while (0)
Logger my_logger;
int some_integer = 5;
LOG(my_logger, "Hello world!" << some_integer << std::endl);
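For reference, a minimal sketch of the lock()/unlock() members this macro assumes (they are not in the original class; the names and the std::ostream sink are illustrative), as thin wrappers around a wxMutex:
#include <ostream>
#include <wx/thread.h>

class Logger
{
    std::ostream* out = nullptr;
    wxMutex mut; // a plain, non-recursive mutex is enough with this approach
public:
    void lock()   { mut.Lock(); }
    void unlock() { mut.Unlock(); }

    template <class T>
    Logger& operator<<(const T& value)
    {
        if (out)
            (*out) << value;
        return *this;
    }

    // manipulators such as std::endl
    Logger& operator<<(std::ostream& (*pf)(std::ostream&))
    {
        if (out)
            (*out) << pf;
        return *this;
    }
};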

First of all, in wxWidgets 2.9 wxLog itself is MT-safe and you can have independent log targets for each thread so perhaps you could just use it instead of writing your own.
Second, using TryLock() is suspicious. If you want to be able to re-lock a mutex already held by the current thread, you should create the mutex with wxMUTEX_RECURSIVE and simply call Lock(). Personally, however, I believe that using recursive mutexes is a bad idea: it makes your MT code less clear and more difficult to reason about, and this is almost invariably catastrophic.
Finally, the whole idea of relying on someone to call << endl is just wrong. It's too easy to forget to do it somewhere and leave the mutex locked, preventing all the other threads from continuing. You absolutely should create a proxy object that locks the mutex in its ctor and unlocks it in its dtor, and use it to ensure that the mutex is always unlocked at the end of the statement doing the logging. Using this technique should also remove the need for a recursive mutex.
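To make that suggestion concrete, here is a minimal sketch of such a proxy (names and details are illustrative, not from the question): Logger::operator<< returns a temporary that holds the wxMutex for the rest of the full logging statement and releases it in its destructor.
#include <ostream>
#include <wx/thread.h>

// Proxy: locks in its constructor, unlocks in its destructor.
class LogProxy
{
    std::ostream* out;
    wxMutex* mut;
public:
    LogProxy(std::ostream* o, wxMutex* m) : out(o), mut(m)
    {
        if (mut) mut->Lock();
    }
    LogProxy(LogProxy&& other) : out(other.out), mut(other.mut) { other.mut = nullptr; }
    LogProxy(const LogProxy&) = delete;
    LogProxy& operator=(const LogProxy&) = delete;
    ~LogProxy()
    {
        if (mut) mut->Unlock(); // always runs at the end of the logging statement
    }

    template <class T>
    LogProxy& operator<<(const T& value)
    {
        if (out) (*out) << value;
        return *this;
    }
    LogProxy& operator<<(std::ostream& (*pf)(std::ostream&)) // std::endl etc.
    {
        if (out) (*out) << pf;
        return *this;
    }
};

class Logger
{
    std::ostream* out = nullptr;
    wxMutex mut; // non-recursive is fine: only the proxy ever locks it
public:
    explicit Logger(std::ostream& os) : out(&os) {}

    template <class T>
    LogProxy operator<<(const T& value)
    {
        LogProxy proxy(out, &mut); // mutex stays locked until the ';' of the statement
        proxy << value;
        return proxy;
    }
};

// Usage: logger << "x = " << 42 << std::endl;  // unlocks automatically here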

I might be wrong but it looks to me that you are trying to lock the mutex when it is in use already.
if(mut->TryLock() == wxMUTEX_BUSY) // if TryLock() returns wxMUTEX_BUSY, another thread is using it
{
    mut->Lock(); // a deadlock situation could be detected here, because Lock() also returns an error code
}
TryLock() attempts to acquire the mutex without blocking. So if that call returns wxMUTEX_BUSY, the mutex is already in use by another thread.
You could have a look at this link for wxMutex, which explains how each function works. The functions return error codes, so you can use those to see what's going wrong in your program.
See the doc from their website below.
wxMutex::Lock
wxMutexError Lock()
Locks the mutex object.
Return value
One of:
wxMUTEX_NO_ERROR: There was no error.
wxMUTEX_DEAD_LOCK: A deadlock situation was detected.
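Building on that, a small illustrative helper (not from the question) that checks the error code instead of discarding it:
#include <wx/log.h>
#include <wx/thread.h>

// Lock the mutex and report the error code instead of ignoring it.
bool LockOrReport(wxMutex& mutex)
{
    switch (mutex.Lock())
    {
        case wxMUTEX_NO_ERROR:
            return true;
        case wxMUTEX_DEAD_LOCK:
            wxLogError("Deadlock detected: this thread already holds the mutex");
            return false;
        default:
            wxLogError("Failed to lock the mutex");
            return false;
    }
}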

Related

Why doesn't mutex work without lock guard?

I have the following code:
#include <chrono>
#include <ctime>
#include <iostream>
#include <mutex>
#include <thread>

int shared_var {0};
std::mutex shared_mutex;

void task_1()
{
    while (true)
    {
        shared_mutex.lock();
        const auto temp = shared_var;
        std::this_thread::sleep_for(std::chrono::seconds(1));
        if (temp == shared_var)
        {
            // do something
        }
        else
        {
            const auto timenow = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
            std::cout << ctime(&timenow) << ": Data race at task_1: shared resource corrupt \n";
            std::cout << "Actual value: " << shared_var << " Expected value: " << temp << "\n";
        }
        shared_mutex.unlock();
    }
}

void task_2()
{
    while (true)
    {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        ++shared_var;
    }
}

int main()
{
    auto task_1_thread = std::thread(task_1);
    auto task_2_thread = std::thread(task_2);
    task_1_thread.join();
    task_2_thread.join();
    return 0;
}
shared_var is protected in task_1 but not protected in task_2
What is expected:
I was expecting else branch is not entered in task_1 as the shared resource is locked.
What actually happens:
Running this code will enter else branch in task_1.
The expected outcome is obtained when I replace shared_mutex.lock(); with std::lock_guard<std::mutex> lock(shared_mutex); and shared_mutex.unlock(); with std::lock_guard<std::mutex> unlock(shared_mutex);
Questions:
What is the problem in my current approach?
Why does it work with lock_guard?
I am running the code on:
https://www.onlinegdb.com/online_c++_compiler
Suppose you have a room with two entries. One entry has a door, the other does not. The room is called shared_var. There are two guys that want to enter the room; they are called task_1 and task_2.
You now want to make sure somehow that only one of them is inside the room at any time.
task_2 can enter the room freely through the entry without a door. task_1 uses the door called shared_mutex.
Your question is now: can we ensure that only one guy is in the room by adding a lock to the door at the first entry?
Obviously not, because the second entry can still be used freely without you having any control over it.
If you experiment, you might observe that without the lock you sometimes find both guys in the room, while after adding the lock you don't. That is pure luck (bad luck actually, because it makes you believe that the lock helped). In fact the lock did not change much: the guy called task_2 can still enter the room while the other guy is inside.
The solution is to make both go through the same door. They lock the door when going inside and unlock it when leaving the room. Putting an automatic lock on the door is nice, because then the guys cannot forget to unlock the door when they leave.
Oh sorry, I got lost in telling a story.
TL;DR: In your code it does not matter whether you use the lock or not. In fact the mutex in your code is useless, because only one thread ever locks and unlocks it. To use the mutex properly, both threads need to lock it before reading/writing the shared memory.
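A minimal sketch of that fix, reusing shared_var and shared_mutex from the question: task_2 must take the same mutex before touching the shared variable.
void task_2()
{
    while (true)
    {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        std::lock_guard<std::mutex> lock(shared_mutex); // same "door" as task_1
        ++shared_var;
    } // lock released at the end of each iteration
}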
With UB (such as a data race), the output is indeterminate: you might see the "expected" output, strange values, a crash, ...
What is the problem in my current approach?
In the first sample, you have a data race: you write the (non-atomic) shared_var in one thread without synchronization and read it in another thread.
Why does it work with lock_guard?
In the modified sample, you lock the same (non-recursive) mutex twice from the same thread, which is also UB.
From std::mutex::lock:
If lock is called by a thread that already owns the mutex, the behavior is undefined
You just have two different behaviours for two different cases of UB (anything can happen in either case).
A mutex lock does not lock a variable, it just locks the mutex so that other code cannot lock the same mutex at the same time.
In other words, all accesses to a shared variable need to be wrapped in a mutex lock on the same mutex to avoid multiple simultaneous accesses to the same variable, it's not in any way automatic just because the variable is wrapped in a mutex lock in another place in the code.
You're not locking the mutex at all in task_2, so there is a race condition.
The reason it seems to work when you wrap the mutex in a std::lock_guard is that the lock guard holds the mutex lock until the end of the scope which in this case is the end of the function.
Your function first locks the mutex with the lock_guard named lock and then, later in the same scope, tries to lock the same mutex again with the lock_guard named unlock. Since the mutex is already locked by lock, execution stops there, and there is no further output because the program is effectively stuck.
If you output "ok" in your code at the point of the "//do something" comment, you'll see that you get the output once and then the program stops all output.
Note: as for whether this behaviour is guaranteed, see @Jarod42's answer for much better info on that. As with most unexpected behaviour in C++, there is probably UB involved.
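For comparison, here is what the lock_guard version of task_1 should look like (a sketch based on the question's code): one guard per critical section, released automatically at the end of each loop iteration. task_2 still needs to lock the same mutex, as shown above, for the race to go away.
void task_1()
{
    while (true)
    {
        std::lock_guard<std::mutex> lock(shared_mutex); // locks here...
        const auto temp = shared_var;
        std::this_thread::sleep_for(std::chrono::seconds(1));
        if (temp != shared_var)
        {
            std::cout << "shared_var changed while the lock was held\n";
        }
    } // ...and unlocks here, at the end of every iteration
}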

Swapping mutex locks

I'm having trouble with properly "swapping" locks. Consider this situation:
bool HidDevice::wait(const std::function<bool(const Info&)>& predicate)
{
    /* A method scoped lock. */
    std::unique_lock waitLock(this->waitMutex, std::defer_lock);
    /* A scoped, general access, lock. */
    {
        std::lock_guard lock(this->mutex);
        bool exitEarly = false;
        /* do some checks... */
        if (exitEarly)
            return false;
        /* Only one thread at a time can execute this method, however
           other threads can execute other methods or abort this one. Thus,
           the general access mutex "this->mutex" should be unlocked (to allow threads
           to call other methods) while at the same time "this->waitMutex" should
           be locked to prevent multiple executions of the code below. */
        waitLock.lock(); // How do I release "this->mutex" here?
    }
    /* do some stuff... */
    /* The main problem is with this event-based OS function. It can
       only be called once with the data I provide, therefore I need to
       have 2 locks - one blocks multiple method calls (the usual stuff)
       and "waitLock" makes sure that only one instance of "osBlockingFunction"
       is running at a time. Since this is a thread-blocking function,
       "this->mutex" must be unlocked at this point. */
    bool result = osBlockingFunction(...);
    /* In methods such as "close", "this->waitMutex" and others are then used
       to make sure that thread-blocking methods have returned and I can safely
       modify related data. */
    /* do some more stuff... */
    return result;
}
How could I solve this "swapping" problem without overly complicating code? I could unlock this->mutex before locking another, however I'm afraid that in that nanosecond, a race condition might occur.
Edit:
Imagine that 3 threads are calling wait method. The first one will lock this->mutex, then this->waitMutex and then will unlock this->mutex. The second one will lock this->mutex and will have to wait for this->waitMutex to be available. It will not unlock this->mutex. The third one will get stuck on locking this->mutex.
I would like to get the last 2 threads to wait for this->waitMutex to be available.
Edit 2:
Expanded example with osBlockingFunction.
It smells like the design/implementation should be a bit different: use a std::condition_variable cv in HidDevice::wait and only one mutex. As you write, "other threads can execute other methods or abort this one", so those threads would call cv.notify_one to "abort" the wait. cv.wait atomically enters the wait and unlocks the mutex, and on cv.notify it atomically exits the wait and re-locks the mutex. With that, HidDevice::wait becomes simpler:
bool HidDevice::wait(const std::function<bool(const Info&)>& predicate)
{
    std::unique_lock<std::mutex> lock(this->m_Mutex); // Only one mutex.
    m_bEarlyExit = false;
    this->cv.wait(lock, spurious wake-up check);
    if (m_bEarlyExit) // A bool data member used to signal the abort.
        return false;
    /* do some stuff... */
}
My assumption (based on the name of the function) is that at /* do some checks... */ the thread waits until some condition becomes true.
"Aborting" the wait is the responsibility of another HidDevice function, called by the other thread:
void HidDevice::do_some_checks() /* do some checks... */
{
    if ( some checks )
    {
        if ( other checks )
            m_bEarlyExit = true;
        this->cv.notify_one();
    }
}
Something similar to that.
I recommend creating a little "unlocker" facility. This is a mutex wrapper with inverted semantics. On lock it unlocks and vice-versa:
template <class Lock>
class unlocker
{
    Lock& locked_;
public:
    unlocker(Lock& lk) : locked_{lk} {}

    void lock()     { locked_.unlock(); }
    bool try_lock() { locked_.unlock(); return true; }
    void unlock()   { locked_.lock(); }
};
Now in place of:
waitLock.lock(); // How do I release "this->mutex" here?
You can instead say:
unlocker temp{lock};
std::lock(waitLock, temp);
where lock is a unique_lock instead of a lock_guard holding mutex.
This will lock waitLock and unlock mutex as if by one uninterruptible instruction.
And now, after coding all of that, I can reason that it can be transformed into:
waitLock.lock();
lock.unlock(); // lock must be a unique_lock to do this
Whether the first version is more or less readable is a matter of opinion. The first version is easier to reason about (once one knows what std::lock does). But the second one is simpler. But with the second, the reader has to think more carefully about the correctness.
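For completeness, a self-contained sketch of the first version using the unlocker defined above (the mutex names here are illustrative stand-ins for the question's mutex and waitMutex):
#include <mutex>

std::mutex general;  // plays the role of this->mutex
std::mutex waiting;  // plays the role of this->waitMutex

void swap_locks()
{
    std::unique_lock<std::mutex> lock(general);                     // held on entry
    std::unique_lock<std::mutex> waitLock(waiting, std::defer_lock);

    unlocker<std::unique_lock<std::mutex>> temp{lock};
    std::lock(waitLock, temp);  // acquires 'waiting' and releases 'general'

    // from here on: waitLock owns 'waiting', lock no longer owns 'general'
}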
Update
Just read the edit in the question. This solution does not fix the problem in the edit: The second thread will block the third (and following threads) from making progress in any code that requires mutex but not waitMutex, until the first thread releases waitMutex.
So in this sense, my answer is technically correct, but does not satisfy the desired performance characteristics. I'll leave it up for informational purposes.

Query std::mutex for lock state

I have a situation where I'd like to do something like that shown below, but there doesn't seem to be a way of querying the mutex without changing its state. I don't want the someFunctionCalledRepeatedlyFromAnotherThread() to hang around waiting for the mutex to free up if it is locked. It must return immediately after performing some alternative action. I'm guessing this omission is there for safety, as the lock may free up in the time between querying it and the function returning. In my case, bad stuff will not happen if the lock is freed while doSomeAlternativeAction() is happening. The fact I'm in this situation probably means I'm doing something wrong, so how should I change my design?
class MyClass
{
    std::mutex initMutex;
public:
    void someInitializationFunction()
    {
        std::lock_guard<std::mutex> lock(initMutex);
        // Do some work
    }
    void someFunctionCalledRepeatedlyFromAnotherThread()
    {
        if (initMutex.isLocked()) // hypothetical - std::mutex has no such method
        {
            doSomeAlternativeAction();
            return;
        }
        // otherwise . . .
        std::lock_guard<std::mutex> lock(initMutex);
        // carry on with work as usual
    }
};
Asking a mutex for its state is useless: it may be unlocked now, but by the time you get around to locking it, it may be locked. So it does not have such a method.
It does, however, have a method try_lock(), which locks it if it is not locked, returning true if it acquired the lock and false otherwise. And std::unique_lock (the more advanced version of std::lock_guard) has an option to call it.
So you can do:
void someFunctionCalledRepeatedlyFromAnotherThread()
{
    std::unique_lock<std::mutex> lock(initMutex, std::try_to_lock);
    if (!lock.owns_lock())
    {
        doSomeAlternativeAction();
        return;
    }
    // otherwise ... go ahead, you have the lock
}
Sounds like you want to use std::unique_lock instead of std::lock_guard. The try_lock method works similarly to TryEnterCriticalSection on Windows: the function atomically acquires the lock and returns true if it can, or just returns false if the lock cannot be acquired (it will not block). Refer to http://msdn.microsoft.com/en-us/library/hh921439.aspx and http://en.cppreference.com/w/cpp/thread/unique_lock/try_lock. Note that unique_lock also has other members available for trying to lock, such as try_lock_for and try_lock_until.
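A hedged sketch of the timed variants (note they require std::timed_mutex rather than plain std::mutex; doSomeAlternativeAction is the function from the question):
#include <chrono>
#include <mutex>

std::timed_mutex initMutex;      // try_lock_for/try_lock_until need a timed mutex
void doSomeAlternativeAction();  // as in the question

void someFunctionCalledRepeatedlyFromAnotherThread()
{
    std::unique_lock<std::timed_mutex> lock(initMutex, std::defer_lock);
    if (!lock.try_lock_for(std::chrono::milliseconds(10)))
    {
        doSomeAlternativeAction();  // give up after a short wait instead of blocking
        return;
    }
    // carry on with work as usual; the lock is released when 'lock' goes out of scope
}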

How is it possible to lock a GMutex twice?

I have a test program that I wrote to try and debug a GMutex issue that I am having and I cannot seem to figure it out. I am using the class below to lock and unlock a mutex within a scoped context. This is similar to BOOST's guard.
/// @brief Helper class used to create a mutex.
///
/// This helper Mutex class will lock a mutex upon creation and unlock when deleted.
/// This class may also be referred to as a guard.
///
/// Therefore this class allows scoped access to the Mutex's locking and unlocking operations
/// and is good practice since it ensures that a Mutex is unlocked, even if an exception is thrown.
///
class cSessionMutex
{
    GMutex* apMutex;
    /// The object used for logging.
    mutable cLog aLog;
public:
    cSessionMutex (GMutex *ipMutex) : apMutex(ipMutex), aLog ("LOG", "->")
    {
        g_mutex_lock(apMutex);
        aLog << cLog::msDebug << "MUTEX LOCK " << apMutex << "," << this << cLog::msEndL;
    }
    ~cSessionMutex ()
    {
        aLog << cLog::msDebug << "MUTEX UNLOCK " << apMutex << "," << this << cLog::msEndL;
        g_mutex_unlock(apMutex);
    }
};
Using this class, I call it as follows:
bool van::cSessionManager::RegisterSession(const std::string &iSessionId)
{
    cSessionMutex lRegistryLock (apRegistryLock);
    // SOME CODE
}
where apRegistryLock is a member variable of type GMutex* and is initialized using g_mutex_new() before I ever call RegisterSession.
With this said, when I run the application with several threads, I sometimes notice at the beginning, when RegisterSession is called for the first few times, that the log (from the constructor above)
[DEBUG] LOG.-> - MUTEX LOCK 0x26abb40,0x7fc14ad7ae10
[DEBUG] LOG.-> - MUTEX LOCK 0x26abb40,0x7fc14af7ce10
is printed twice in a row with the same mutex but different instances, suggesting that the mutex is being locked twice or that the second lock is simply being ignored - which is seriously bad.
Moreover, it is worth noting that I also checked whether these logs were initiated from the same thread using the g_thread_self() function, and this returned two separate thread identifiers, suggesting that the mutex was locked twice from separate threads.
So my question is, how is it possible for this to occur?
If it's called twice in the same call chain in the same thread, this could happen. The second lock is typically (although not always) ignored. At least in pthreads, it's possible to configure a mutex so that multiple locks from the same thread are counted (a recursive mutex).
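For illustration, this is roughly how a counted (recursive) mutex is configured with pthreads; with GLib you would reach for a recursive mutex type instead of a plain GMutex:
#include <pthread.h>

pthread_mutex_t mutex;

// Initialise 'mutex' as a recursive ("counted") mutex: the same thread may lock
// it multiple times and must unlock it the same number of times.
void init_recursive_mutex()
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&mutex, &attr);
    pthread_mutexattr_destroy(&attr);
}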
What was happening in my case was that another thread was calling the g_cond_timed_wait function with the same mutex, but with the mutex unlocked. In that case g_cond_timed_wait unlocks a mutex that is not locked and leaves the mutex in an undefined state, which explains the behaviour described in this question: the mutex apparently being locked twice.

Acquire a lock on two mutexes and avoid deadlock

The following code contains a potential deadlock, but seems to be necessary: to safely copy data to one container from another, both containers must be locked to prevent changes from occurring in another thread.
void foo::copy(const foo & rhs)
{
    pMutex->lock();
    rhs.pMutex->lock();
    // do copy
}
Foo has an STL container and "do copy" essentially consists of using std::copy. How do I lock both mutexes without introducing deadlock?
Impose some kind of total order on instances of foo and always acquire their locks in either increasing or decreasing order, e.g., foo1->lock() and then foo2->lock().
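A sketch of that ordering idea, using the objects' addresses as the total order (it assumes the foo class from the question, with pMutex being a std::mutex* to match the pMutex->lock() calls above):
#include <functional>
#include <mutex>
#include <utility>

void foo::copy(const foo& rhs)
{
    if (this == &rhs)
        return; // nothing to copy, and avoids locking the same mutex twice

    std::mutex* first  = pMutex;
    std::mutex* second = rhs.pMutex;
    if (std::less<const foo*>()(&rhs, this))
        std::swap(first, second);   // always lock the "lower" object first

    std::lock_guard<std::mutex> l1(*first);
    std::lock_guard<std::mutex> l2(*second);
    // do copy
}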
Another approach is to use functional semantics and instead write a foo::clone method that creates a new instance rather than clobbering an existing one.
If your code is doing lots of locking, you may need a complex deadlock-avoidance algorithm such as the banker's algorithm.
How about this?
void foo::copy(const foo & rhs)
{
    scopedLock lock(rhs.pMutex); // release mutex in destructor
    foo tmp(rhs);
    swap(tmp); // no throw swap locked internally
}
This is exception safe and pretty thread safe as well. To be 100% thread safe you'll need to review all code paths, then re-review them with another set of eyes, and after that review them again...
As @Mellester mentioned, you can use std::lock for locking multiple mutexes while avoiding deadlock.
#include <mutex>

void foo::copy(const foo& rhs)
{
    std::lock(*pMutex, *rhs.pMutex); // lock both mutexes without risking deadlock
    std::lock_guard<std::mutex> l1(*pMutex, std::adopt_lock);
    std::lock_guard<std::mutex> l2(*rhs.pMutex, std::adopt_lock);
    // do copy
}
But note that you should check that &rhs is not this, since in that case std::lock will lead to UB due to locking the same mutex twice.
This is a known problem and there is already a std solution: std::lock() can be called on 2 or more mutexes at the same time while avoiding deadlocks.
More information here.
It also offers a recommendation:
std::scoped_lock offers a RAII wrapper for this function, and is
generally preferred to a naked call to std::lock.
Of course this doesn't really allow an early release of one lock before the other, so use std::defer_lock or std::adopt_lock like I did in this answer to a similar question.
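A brief sketch of the std::scoped_lock form (C++17), again assuming pMutex is a std::mutex* as in the question:
#include <mutex>

void foo::copy(const foo& rhs)
{
    if (this == &rhs)
        return;                                  // don't lock the same mutex twice
    std::scoped_lock lock(*pMutex, *rhs.pMutex); // locks both, deadlock-free
    // do copy
}   // both mutexes released here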
To avoid a deadlock it's probably best to wait until both resources can be locked.
I don't know which mutex API you are using, so here is some arbitrary pseudocode. Assume that can_lock() only checks whether a mutex could be locked, and that try_lock() returns true if it did lock and false if the mutex is already locked by somebody else.
void foo::copy(const foo & rhs)
{
    for (;;)
    {
        if (!pMutex->can_lock() || !rhs.pMutex->can_lock())
        {
            // Depending on your environment, call or don't call sleep()
            continue;
        }
        if (!pMutex->try_lock())
            continue;
        if (!rhs.pMutex->try_lock())
        {
            pMutex->unlock(); // release the first lock before retrying
            continue;
        }
        break;
    }
    // do copy
}
You can try locking both mutexes at the same time using a scoped_lock or auto_lock, like bank transfers do:
void Transfer(Receiver& recv, Sender& send)
{
    scoped_lock rlock(recv.mutex);
    scoped_lock slock(send.mutex);
    // do transaction.
}
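Note that two separate guards taken one after the other, as above, still acquire the locks in a fixed order and can deadlock if the same two objects appear in opposite roles in concurrent transfers. A hedged C++17 sketch that locks both at once (the struct definitions are illustrative; it assumes the mutex members are std::mutex):
#include <mutex>

struct Receiver { std::mutex mutex; /* ... */ };
struct Sender   { std::mutex mutex; /* ... */ };

void Transfer(Receiver& recv, Sender& send)
{
    std::scoped_lock lock(recv.mutex, send.mutex); // acquires both without deadlock
    // do transaction.
}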