Shouldn't this code lead to a deadlock? - c++

I have a class which contains a mutex and an object; each time I need to access the contained object, a method is called that locks the mutex and returns the contained object. Let's see the code:
template <typename MUTEX, typename RESOURCE>
class LockedResource
{
    using mutex_t    = MUTEX;
    using resource_t = RESOURCE;

    mutex_t    m_mutex;
    resource_t m_resource;

public:
    template <typename ... ARGS>
    LockedResource(ARGS &&... args) :
        m_resource(std::forward<ARGS>(args) ...)
    {}

    class Handler
    {
        std::unique_lock<mutex_t> m_lock;  // unique lock
        resource_t &m_resource;            // ref to resource

        friend class LockedResource;

        Handler(mutex_t &a_mutex, resource_t &a_resource) :
            m_lock(a_mutex),               // mutex automatically locked
            m_resource(a_resource)
        { std::cout << "Resource locked\n"; }

    public:
        Handler(Handler &&a_handler) :
            m_lock(std::move(a_handler.m_lock)),
            m_resource(a_handler.m_resource)
        { std::cout << "Moved\n"; }

        ~Handler()                         // mutex automatically unlocked
        { std::cout << "Resource unlocked\n"; }

        RESOURCE *operator->()
        { return &m_resource; }
    };

    Handler get()
    { return {m_mutex, m_resource}; }
};
template <typename T> using Resource = LockedResource<std::mutex, T>;
The idea behind this code is to wrap an object and protect it from concurrent access by multiple threads; the wrapped object has private visibility and the only way to access it is through the internal class Handler. The expected usage is the following:
LockedResource<std::mutex, Foo> locked_foo;

void f()
{
    auto handler = locked_foo.get(); // this will lock locked_foo.m_mutex
    handler->some_foo_method();
    // going out of scope will call the handler dtor and
    // unlock locked_foo.m_mutex
}
So, if I'm not mistaken, calling the LockedResource::get method creates a LockedResource::Handler value which locks LockedResource::m_mutex for the entire lifetime of the Handler... but I must be mistaken, because the code below doesn't cause a deadlock:
LockedResource<std::mutex, std::vector<int>> locked_vector{10, 10};

int main()
{
    /*1*/ auto vec = locked_vector.get(); // vec = Resource<vector>::Handler
    /*2*/ std::cout << locked_vector.get()->size() << '\n';
    /*3*/ std::cout << vec->size() << '\n';
    return 0;
}
I was expecting line /*1*/ to lock locked_vector.m_mutex and then line /*2*/ to try to lock the same, already locked, mutex, causing a deadlock, but the output is the following:
Resource locked
Resource locked
10
Resource unlocked
10
Resource unlocked
Shouldn't the second ::get() lead to a deadlock?
Am I accessing the wrapped resource through the same lock, or am I misunderstanding something?

Well, quick tests show the following:
GCC - shows the output shown in the question.
Clang - the process was killed on the online compiler I used, so: deadlock.
MSVC2013 - throws "device or resource busy: device or resource busy"; it detected an attempt to lock an already locked mutex on the same thread.
What does the standard have to say about it?
30.4.1.2.1/4 [ Note: A program may deadlock if the thread that owns a mutex object calls lock() on that object. If the implementation
can detect the deadlock, a resource_deadlock_would_occur error condition may be observed. — end note ]
But according to 30.4.1.2/13 it should throw one of these:
— resource_deadlock_would_occur — if the implementation detects that a deadlock would occur.
— device_or_resource_busy — if the mutex is already locked and blocking is not possible.
So the answer is yes, what you observe is incorrect behavior. It should either block or throw, but not proceed as if nothing had happened.
The behavior observed is nevertheless possible because you have UB in the code. According to 17.6.4.11, violating a Requires clause is UB, and in 30.4.1.2/7 we have the following requirement:
Requires: If m is of type std::mutex, std::timed_mutex, or
std::shared_timed_mutex, the calling thread does not own the mutex.
Thanks to @T.C. for pointing out the UB.
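To steer clear of the UB in practice, either reuse the Handler you already hold instead of calling get() again, or instantiate the wrapper with a recursive mutex type so the same thread may re-lock. A rough sketch of both options, assuming the LockedResource template from the question:
// Option 1: reuse the handler already held by this thread (no second lock attempt).
auto vec = locked_vector.get();
std::cout << vec->size() << '\n';
std::cout << vec->size() << '\n';

// Option 2: pick a recursive mutex type, so a second get() on the same
// thread increments the lock count instead of invoking UB.
LockedResource<std::recursive_mutex, std::vector<int>> rec_vector{10, 10};
auto rec = rec_vector.get();
std::cout << rec_vector.get()->size() << '\n'; // OK: the same thread re-locks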

I'm not familiar with this specific mutex/resource implementation, but it's common for such synchronization primitives to contain a LOCK COUNT, and to allow the same thread to lock the same object multiple times.
When the mutex has been unlocked the same number of times as it was locked, then another thread is free to lock it.
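In standard C++, that counted behaviour is what std::recursive_mutex provides (a plain std::mutex does not guarantee it). A small, self-contained illustration of the counting, unrelated to the questioner's wrapper:
#include <iostream>
#include <mutex>

std::recursive_mutex rm;

void inner() {
    std::lock_guard<std::recursive_mutex> lock(rm); // same thread, second lock: OK,
    std::cout << "inner holds the lock too\n";      // the ownership count goes to 2
}

int main() {
    std::lock_guard<std::recursive_mutex> lock(rm); // count = 1
    inner();                                        // count = 2, then back to 1
    // fully released when main's lock_guard is destroyed
}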

Related

Default destructor blocked on a mutex

I have code like the following for an experiment:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <unistd.h>

class Foo {
public:
    Foo() : mThread(&Foo::test, this) {
    }
private:
    std::thread mThread;
    std::mutex mMutex;
    std::condition_variable mCv;

    void test() {
        std::unique_lock<std::mutex> lock(mMutex);
        mCv.wait(lock);
    }
};

int main() {
    Foo foo;
    usleep(1000);
    std::cout << "wake up" << std::endl;
}
The code blocks in the destructor of Foo. After debugging, I found that the thread is blocked on futex_wait, but double-checking the documentation of condition_variable::wait(), it says that
At the moment of blocking the thread, the function automatically calls
lck.unlock(), allowing other locked threads to continue.
So why is my code still blocked on mMutex, given that mCv has unlocked it?
P.S. I know this design pattern has problems; I'm just curious whether I'm missing some knowledge about condition variables.
Your program exhibits undefined behavior. At the closing brace of main, foo is destroyed, and so are its member variables, beginning with mCv. The standard says:
[thread.condition.condvar]/5
~condition_variable();
Requires: There shall be no thread blocked on *this. [ Note: That is, all threads shall have been notified... —end note ]
Your program violates the Requires clause, because test, running in another thread, waits on mCv and has never been notified.
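One way to make the teardown well defined, sketched here under the assumption that Foo should stop its worker on destruction (reusing the headers from the snippet above; mDone is a name added for the sketch): notify the condition variable and join the thread before the members are destroyed, with a boolean predicate guarding against the notification racing ahead of the wait.
class Foo {
public:
    Foo() : mThread(&Foo::test, this) {}

    ~Foo() {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mDone = true;               // set the predicate under the lock
        }
        mCv.notify_one();               // wake the worker if it is waiting
        mThread.join();                 // let it finish before members are destroyed
    }

private:
    std::mutex mMutex;
    std::condition_variable mCv;
    bool mDone = false;
    std::thread mThread;                // declared last: constructed after the members it uses

    void test() {
        std::unique_lock<std::mutex> lock(mMutex);
        mCv.wait(lock, [this] { return mDone; });  // predicate avoids a lost wakeup
    }
};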

Delayed start of a thread in C++ 11

I'm getting into C++11 threads and have run into a problem.
I want to declare a thread variable as global and start it later.
However all the examples I've seen seem to start the thread immediately for example
thread t(doSomething);
What I want is
thread t;
and start the thread later.
What I've tried is
if(!isThreadRunning)
{
thread t(readTable);
}
but now t has block scope. So I want to declare t and then start the thread later so that t is accessible to other functions.
Thanks for any help.
std::thread's default constructor instantiates a std::thread without starting or representing any actual thread.
std::thread t;
The move assignment operator transfers the state of a thread object and leaves the moved-from thread object in its default-constructed state:
t = std::thread(/* new thread code goes here */);
This first constructs a temporary thread object representing a new thread, transfers the new thread's state into the existing thread object (which holds the default state), and leaves the temporary thread object in the default state that does not represent any running thread. The temporary thread object is then destroyed, doing nothing.
Here's an example:
#include <iostream>
#include <thread>

void thread_func(const int i) {
    std::cout << "hello from thread: " << i << std::endl;
}

int main() {
    std::thread t;
    std::cout << "t exists" << std::endl;

    t = std::thread{ thread_func, 7 };
    t.join();

    std::cout << "done!" << std::endl;
}
As antred says in his answer, you can use a condition variable to make the thread wait at the beginning of its routine.
Scott Meyers, in his book “Effective Modern C++” (Item 39: “Consider void futures for one-shot event communication”), proposes using a void future instead of lower-level entities (a boolean flag, a condition variable and a mutex). The problem can then be solved like this:
auto thread_starter = std::promise<void>{};

auto thread = std::thread([starter_future = thread_starter.get_future()]() mutable {
    starter_future.wait(); // wait before starting actual work
    …;                     // do actual work
});

…; // you can do something, thread is like “paused” here

thread_starter.set_value(); // “start” the thread (break its initial waiting)
Scott Meyers also warns about exceptions in the second … (marked by the “you can do something, thread is like ‘paused’ here” comment). If thread_starter.set_value() is never called for some reason (for example, because an exception is thrown in the second …), the thread will wait forever, and any attempt to join it will result in a deadlock.
As both approaches (condvar-based and future-based) contain hidden pitfalls, and the first (condvar-based) needs some boilerplate code, I propose writing a wrapper class around std::thread. Its interface should be similar to that of std::thread (except that its instances should be assignable from other instances of the same class, not from std::thread), but with an additional void start() method.
Future-based thread-wrapper
class initially_suspended_thread {
    std::promise<bool> starter;
    std::thread impl;
public:
    template<class F, class ...Args>
    explicit initially_suspended_thread(F &&f, Args &&...args):
        starter(),
        impl([
            starter_future = starter.get_future(),
            routine = std::bind(std::forward<F>(f), std::forward<Args>(args)...)
        ]() mutable {if (starter_future.get()) routine();})
    {}

    void start() {starter.set_value(true);}

    ~initially_suspended_thread() {
        try {starter.set_value(false);}
        catch (const std::future_error &exc) {
            if (exc.code() != std::future_errc::promise_already_satisfied) throw;
            return; //already “started”, no need to do anything
        }
        impl.join(); //auto-join not-yet-“started” threads
    }

    …; //other methods, trivial
};
Condvar-based thread-wrapper
class initially_suspended_thread {
    std::mutex state_mutex;
    enum {INITIAL, STARTED, ABORTED} state;
    std::condition_variable state_condvar;
    std::thread impl;
public:
    template<class F, class ...Args>
    explicit initially_suspended_thread(F &&f, Args &&...args):
        state_mutex(), state(INITIAL), state_condvar(),
        impl([
            &state_mutex = state_mutex, &state = state, &state_condvar = state_condvar,
            routine = std::bind(std::forward<F>(f), std::forward<Args>(args)...)
        ]() {
            {
                std::unique_lock state_mutex_lock(state_mutex);
                state_condvar.wait(
                    state_mutex_lock,
                    [&state]() {return state != INITIAL;}
                );
            }
            if (state == STARTED) routine();
        })
    {}

    void start() {
        {
            std::lock_guard state_mutex_lock(state_mutex);
            state = STARTED;
        }
        state_condvar.notify_one();
    }

    ~initially_suspended_thread() {
        {
            std::lock_guard state_mutex_lock(state_mutex);
            if (state == STARTED) return; //already “started”, no need to do anything
            state = ABORTED;
        }
        impl.join(); //auto-join not-yet-“started” threads
    }

    …; //other methods, trivial
};
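For illustration, usage of either wrapper could look roughly like this (assuming the elided “other methods” include a join() forwarding to the underlying std::thread; that method name is an assumption, not part of the code above):
initially_suspended_thread worker([](int n) {
    std::cout << "working on " << n << '\n';
}, 42);

// ... the worker is parked here until we decide to release it ...

worker.start(); // let the routine run
worker.join();  // assumed to forward to the wrapped std::thread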
There is no "standard" way of creating a thread "suspended", which I assume is what you want to do with the C++ thread library. Because it is not supported on every platform that has threads, it is not in the C++ API.
You might want to create a class with all the data it requires but not actually run your thread function yet. This is not the same as creating the thread, but may be what you want. If so, create that object, then later bind it and its operator() or start() function or whatever to the thread.
You might want the thread id for your thread. That means you do actually need to start the thread function. However, it can start by waiting on a condition variable; you then signal or broadcast that condition variable later when you want it to continue running. Of course, you can have the function check a condition after it resumes, in case you have decided to close it and not run it after all (in which case it will just return instantly).
You might want a std::thread object with no function. You can do that and attach it to a function later to run that function in a new thread.
I would give the thread a condition variable and a boolean called startRunning (initially set to false). Effectively you would start the thread immediately upon creation, but the first thing it would do is suspend itself (using the condition_variable) and then only begin processing its actual task when the condition_variable is signaled from outside (and the startRunning flag set to true).
EDIT: PSEUDO CODE:
// in your worker thread
{
    std::unique_lock<std::mutex> l( theMutex ); // condition_variable::wait needs a unique_lock
    while ( ! startRunning )
    {
        cond_var.wait( l );
    }
}
// now start processing task

// in your main thread (after creating the worker thread)
{
    std::lock_guard<std::mutex> l( theMutex );
    startRunning = true;
    cond_var.notify_one();
}
EDIT #2: In the above code, the variables theMutex, startRunning and cond_var must be accessible by both threads. Whether you achieve that by making them globals or by encapsulating them in a struct / class instance is up to you.
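One possible way to package those three variables together, sketched here with hypothetical names (delayed_start, gate and the worker lambda are made up for the illustration):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

struct delayed_start {
    std::mutex theMutex;
    std::condition_variable cond_var;
    bool startRunning = false;

    void wait() {                       // called first thing by the worker
        std::unique_lock<std::mutex> l(theMutex);
        cond_var.wait(l, [this] { return startRunning; });
    }
    void go() {                         // called by the main thread later
        { std::lock_guard<std::mutex> l(theMutex); startRunning = true; }
        cond_var.notify_all();
    }
};

int main() {
    delayed_start gate;
    std::thread t([&gate] {
        gate.wait();                    // parked until go()
        std::cout << "running now\n";
    });
    // ... do other setup work here ...
    gate.go();
    t.join();
}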
m_grabber, as first declared in the class, runs nothing. In the launch_grabber method we assign it a new thread object constructed with a lambda, and the thread running that lambda executes within the context of the source class.
class source {
    ...
    std::thread m_grabber;
    bool m_active;
    ...
};

bool source::launch_grabber() {
    // start grabber
    m_grabber = std::thread{
        [&] () {
            m_active = true;
            while (true)
            {
                if (!m_active)
                    break;
                // TODO: something in new thread
            }
        }
    };
    m_grabber.detach();
    return true;
}
You could use the singleton pattern, or, I would rather say, antipattern.
Inside the singleton you would have a std::thread object encapsulated. Upon first access to the singleton, your thread will be created and started.
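A minimal sketch of that idea, assuming a Meyers-style singleton (the names worker and instance() are invented for the illustration):
#include <iostream>
#include <thread>

class worker {
public:
    static worker& instance() {          // thread is created on first access
        static worker w;
        return w;
    }
    void join() { if (t.joinable()) t.join(); }
private:
    worker() : t([] { std::cout << "started lazily\n"; }) {}
    ~worker() { join(); }
    worker(const worker&) = delete;
    worker& operator=(const worker&) = delete;
    std::thread t;
};

int main() {
    // nothing runs until somebody touches the singleton...
    worker::instance();                  // ...now the thread is created and started
    worker::instance().join();
}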

Why might this thread management pattern result in a deadlock?

I'm using a common base class has_threads to manage any type that should be allowed to instantiate a boost::thread.
Instances of has_threads each own a set of threads (to support waitAll and interruptAll functions, which I do not include below), and should automatically invoke releaseThread when a thread terminates, to maintain the set's integrity.
In my program, I have just one of these. Threads are created on an interval every 10s, and each performs a database lookup. When the lookup is complete, the thread runs to completion and releaseThread should be invoked; with the mutex held, the thread object is removed from internal tracking. I can see this working properly with the output ABC.
Once in a while, though, the mechanisms collide: releaseThread is executed perhaps twice concurrently. What I can't figure out is why this results in a deadlock. All thread invocations from this point never output anything other than A. [It's worth noting that I'm using a thread-safe stdlib, and that the issue remains when IOStreams are not used.] Stack traces indicate that the mutex is blocking these threads, but why would the lock not eventually be released by the first thread for the second, then by the second for the third, and so on?
Am I missing something fundamental about how scoped_lock works? Is there anything obvious here that I've missed that could lead to a deadlock, despite (or even due to?) the use of a mutex lock?
Sorry for the poor question, but as I'm sure you're aware it's nigh-on impossible to present real testcases for bugs like this.
class has_threads {
protected:
    template <typename Callable>
    void createThread(Callable f, bool allowSignals)
    {
        boost::mutex::scoped_lock l(threads_lock);

        // Create and run thread
        boost::shared_ptr<boost::thread> t(new boost::thread());

        // Track thread
        threads.insert(t);

        // Run thread (do this after inserting the thread for tracking so that we're ready for the on-exit handler)
        *t = boost::thread(&has_threads::runThread<Callable>, this, f, allowSignals);
    }

private:
    /**
     * Entrypoint function for a thread.
     * Sets up the on-end handler then invokes the user-provided worker function.
     */
    template <typename Callable>
    void runThread(Callable f, bool allowSignals)
    {
        boost::this_thread::at_thread_exit(
            boost::bind(
                &has_threads::releaseThread,
                this,
                boost::this_thread::get_id()
            )
        );

        if (!allowSignals)
            blockSignalsInThisThread();

        try {
            f();
        }
        catch (boost::thread_interrupted& e) {
            // Yes, we should catch this exception!
            // Letting it bubble over is _potentially_ dangerous:
            // http://stackoverflow.com/questions/6375121
            std::cout << "Thread " << boost::this_thread::get_id() << " interrupted (and ended)." << std::endl;
        }
        catch (std::exception& e) {
            std::cout << "Exception caught from thread " << boost::this_thread::get_id() << ": " << e.what() << std::endl;
        }
        catch (...) {
            std::cout << "Unknown exception caught from thread " << boost::this_thread::get_id() << std::endl;
        }
    }

    void releaseThread(boost::thread::id thread_id)
    {
        std::cout << "A";
        boost::mutex::scoped_lock l(threads_lock);
        std::cout << "B";

        for (threads_t::iterator it = threads.begin(), end = threads.end(); it != end; ++it) {
            if ((*it)->get_id() != thread_id)
                continue;

            threads.erase(it);
            break;
        }

        std::cout << "C";
    }

    void blockSignalsInThisThread()
    {
        sigset_t signal_set;
        sigemptyset(&signal_set);
        sigaddset(&signal_set, SIGINT);
        sigaddset(&signal_set, SIGTERM);
        sigaddset(&signal_set, SIGHUP);
        sigaddset(&signal_set, SIGPIPE); // http://www.unixguide.net/network/socketfaq/2.19.shtml
        pthread_sigmask(SIG_BLOCK, &signal_set, NULL);
    }

    typedef std::set<boost::shared_ptr<boost::thread> > threads_t;
    threads_t threads;
    boost::mutex threads_lock;
};

struct some_component : has_threads {
    some_component() {
        // set a scheduler to invoke createThread(bind(&some_work, this)) every 10s
    }

    void some_work() {
        // usually pretty quick, but I guess sometimes it could take >= 10s
    }
};
Well, a deadlock can occur if the same thread locks a mutex it has already locked (unless you use a recursive mutex).
If the release part is called a second time by the same thread, as seems to happen with your code, you have a deadlock.
I have not studied your code in detail, but you probably have to redesign (simplify?) it to be sure that a lock cannot be acquired twice by the same thread. You could probably add a safeguard that checks ownership of the lock...
EDIT:
As said in my comment and in IronMensan's answer, one possible case is that the thread stops during creation, at_thread_exit being called before the mutex locked in the creation part of your code is released.
EDIT2:
Well, with a mutex and a scoped lock, I can only imagine a recursive lock, or a lock that is never released. The latter can happen if a loop becomes infinite due to memory corruption, for instance.
I suggest adding more logs with a thread id to check whether there is a recursive lock or something strange. Then I would check that the loop is correct. I would also check that at_thread_exit is only called once per thread...
One more thing: check the effect of erasing (and thus destroying) a thread object while still inside the at_thread_exit handler...
my 2 cents
You may need to do something like this:
void createThread(Callable f, bool allowSignals)
{
    // Create and run thread
    boost::shared_ptr<boost::thread> t(new boost::thread());

    {
        boost::mutex::scoped_lock l(threads_lock);
        // Track thread
        threads.insert(t);
    }

    // Do not hold threads_lock while starting the new thread in case
    // it completes immediately

    // Run thread (do this after inserting the thread for tracking so that we're ready for the on-exit handler)
    *t = boost::thread(&has_threads::runThread<Callable>, this, f, allowSignals);
}
In other words, use threads_lock exclusively to protect threads.
Update:
To expand on something in the comments, with some speculation about how boost::thread works, the lock patterns could look something like this:
createThread:
1. (createThread) obtain threads_lock
2. (boost::thread::operator=) obtain a boost::thread internal lock
3. (boost::thread::operator=) release the boost::thread internal lock
4. (createThread) release threads_lock
thread end handler:
1. (at_thread_exit) obtain a boost::thread internal lock
2. (releaseThread) obtain threads_lock
3. (releaseThread) release threads_lock
4. (at_thread_exit) release the boost::thread internal lock
If those two boost::thread locks are the same lock, the potential for deadlock is clear. But this is speculation because much of the boost code scares me and I try not to look at it.
createThread could/should be reworked to move step 4 up between steps one and two and eliminate the potential deadlock.
It is possible that the created thread finishes before or while the assignment operator in createThread is completing. Using an event queue or some other structure might be necessary. A simpler, though hack-ish, solution might work as well: don't change createThread, since you have to use threads_lock to protect threads itself as well as the thread objects it points to. Instead, change runThread to this:
template <typename Callable>
void runThread(Callable f, bool allowSignals)
{
    //SNIP setup
    try {
        f();
    }
    //SNIP catch blocks

    //ensure that createThread is complete before this thread terminates
    boost::mutex::scoped_lock l(threads_lock);
}

How is it possible to lock a GMutex twice?

I have a test program that I wrote to try to debug a GMutex issue that I am having, and I cannot seem to figure it out. I am using the class below to lock and unlock a mutex within a scoped context. This is similar to Boost's guard.
/// @brief Helper class used to create a mutex.
///
/// This helper Mutex class will lock a mutex upon creation and unlock when deleted.
/// This class may also be referred to as a guard.
///
/// Therefore this class allows scoped access to the Mutex's locking and unlocking operations
/// and is good practice since it ensures that a Mutex is unlocked, even if an exception is thrown.
///
class cSessionMutex
{
    GMutex* apMutex;

    /// The object used for logging.
    mutable cLog aLog;

public:
    cSessionMutex (GMutex *ipMutex) : apMutex(ipMutex), aLog ("LOG", "->")
    {
        g_mutex_lock(apMutex);
        aLog << cLog::msDebug << "MUTEX LOCK " << apMutex << "," << this << cLog::msEndL;
    }

    ~cSessionMutex ()
    {
        aLog << cLog::msDebug << "MUTEX UNLOCK " << apMutex << "," << this << cLog::msEndL;
        g_mutex_unlock(apMutex);
    }
};
Using this class, I call it as follows:
bool van::cSessionManager::RegisterSession(const std::string &iSessionId)
{
    cSessionMutex lRegistryLock (apRegistryLock);
    // SOME CODE
}
where apRegistryLock is a member variable of type GMutex* and is initialized using g_mutex_new() before I ever call RegisterSession.
With this said, when I run the application with several threads, I sometimes notice, during the first few calls to RegisterSession, that the log (from the constructor above)
[DEBUG] LOG.-> - MUTEX LOCK 0x26abb40,0x7fc14ad7ae10
[DEBUG] LOG.-> - MUTEX LOCK 0x26abb40,0x7fc14af7ce10
is printed twice in a row with the same mutex but a different instance, suggesting that the mutex is being locked twice or that the second lock is simply being ignored - which is seriously bad.
Moreover, it is worth noting that I also checked whether these logs came from the same thread using the g_thread_self() function, and this returned two separate thread identifiers, suggesting that the mutex was locked twice from separate threads.
So my question is, how is it possible for this to occur?
If it's called twice in the same call chain in the same thread, this could happen. The second lock is typically (although not always) ignored. At least in pthreads it's possible to configure a mutex so that locks are counted.
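For reference, this is roughly how the counted (recursive) behaviour is requested with pthreads; it is shown only to illustrate the point above and is unrelated to GMutex:
#include <pthread.h>

pthread_mutex_t mtx;

void init_recursive_mutex() {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&mtx, &attr);     // the same thread may now lock mtx repeatedly
    pthread_mutexattr_destroy(&attr);
}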
What was happening in my case was that another thread was calling the g_cond_timed_wait function with the same mutex, but with the mutex unlocked. In this case, g_cond_timed_wait unlocks a mutex that is not locked and leaves the mutex in an undefined state, which explains why I was seeing the behaviour described in this question: the mutex being locked twice.
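For completeness, a sketch of the calling convention that avoids that misuse, using the old (pre-GLib 2.32) API the question appears to use; apCond and sessionReady are hypothetical names introduced for the sketch:
GTimeVal deadline;
g_get_current_time(&deadline);
g_time_val_add(&deadline, 5 * G_USEC_PER_SEC);    // wait for at most 5 seconds

g_mutex_lock(apRegistryLock);                     // the mutex MUST be held before waiting
while (!sessionReady) {
    if (!g_cond_timed_wait(apCond, apRegistryLock, &deadline))
        break;                                    // timed out
}
g_mutex_unlock(apRegistryLock);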