Recently I've been playing around with Boost mutexes, and I'm not sure if I can do the following:
#include <boost/thread/recursive_mutex.hpp>

boost::recursive_mutex ListLock;

void function1()
{
    ListLock.lock();
    // some operations
    ListLock.unlock();
}

int main()
{
    ListLock.lock();
    function1();
    ListLock.unlock();
}
Is it okay to lock the ListLock twice?
It depends on the lock. Recursive locks allow ... recursive locks. So, no problem in your example.
Non-recursive locks (such as std::mutex or boost::mutex) would deadlock (technically, the behaviour is undefined, but most POSIX thread implementations either deadlock or fail with EDEADLK).
See http://en.cppreference.com/w/cpp/thread/recursive_mutex for recursive_mutex
Related
I have a multi-threaded program and it only has 1 mutex. I want the program to terminate if it attempts to grab a lock it already holds.
My actual program is pretty complicated. Of course we try to avoid deadlocks at the programming level, but just in case we miss an edge case, we prefer the program to fail immediately rather than deadlock.
See a minimal example below.
#include <mutex>

std::mutex lock;

void f1() {
    std::lock_guard<std::mutex> guard1(lock);
    // some code ...
}

void f2() {
    std::lock_guard<std::mutex> guard2(lock);
    f1();  // Will deadlock here! How can I make it terminate instead of deadlock?
}
std::mutex provides no mechanism for accomplishing what you describe.
Instead of deadlock detection, you should be looking at deadlock avoidance. Careful programming, including appropriate choice of mutex role and scope, can help. Refactoring could help, too. For example, you could change your example code to
std::mutex lock;

static void f1_impl() {
    // some code ...
}

void f1() {
    std::lock_guard<std::mutex> guard1(lock);
    f1_impl();
}

void f2() {
    std::lock_guard<std::mutex> guard2(lock);
    f1_impl();
}
But if that doesn't get you all the way to where you want to be, then you might want to consider using std::recursive_mutex instead of std::mutex. That addresses the problem by allowing a thread that already holds the mutex locked to lock it again.
The following code hangs because of multiple calls to acquire a non-recursive mutex:
#include <pthread.h>

class Lock
{
public:
    Lock( pthread_mutex_t& mutex )
        : mutex_( mutex )
    {
        pthread_mutex_lock( &mutex_ );
    }
    ~Lock()
    {
        pthread_mutex_unlock( &mutex_ );
    }
private:
    pthread_mutex_t& mutex_;
};

class Foo
{
public:
    Foo()
    {
        pthread_mutex_init( &mutex_, NULL );
    }
    ~Foo()
    {
        pthread_mutex_destroy( &mutex_ );
    }
    void hang()
    {
        Lock l( mutex_ );
        subFunc();
    }
    void subFunc()
    {
        Lock l( mutex_ );
    }
private:
    pthread_mutex_t mutex_;
};

int main()
{
    Foo f;
    f.hang();
}
Is there a word or phrase for this situation? I'm not sure, but I don't think this can properly be called a deadlock: I'm of the understanding that a deadlock proper refers to the stalemate resulting from impassably ordered acquisition of multiple shared resources.
I've been anecdotally calling this a "single mutex deadlock" but I'd like to learn if there is a more proper term/phrase for this.
The Wikipedia article on reentrant mutexes cites Pattern-Oriented Software Architecture, which uses the term "self-deadlock." This term seems pretty reasonable to me!
...mutexes come in two basic flavors: recursive and non-recursive. A recursive mutex allows re-entrant locking, in which a thread that has already locked a mutex can lock it again and progress. Non-recursive mutexes, in contrast, cannot: a second lock in the same thread results in self-deadlock. Non-recursive mutexes can potentially be much faster to lock and unlock than recursive mutexes, but the risk of self-deadlock means that care must be taken when an object calls any methods on itself, either directly or via a callback, because double-locking will cause the thread to hang.
(emphasis added)
Various search results across a variety of technologies corroborate the use of this term.
https://docs.oracle.com/cd/E19253-01/816-5137/guide-35930/index.html
https://support.microsoft.com/en-in/help/2963138/fix-parallel-deadlock-or-self-deadlock-occurs-when-you-run-a-query-tha
https://issues.apache.org/jira/browse/DERBY-6692
https://github.com/citusdata/citus/issues/1572
"self deadlock" or "recursive deadlock".
According to the manual, it is undefined behavior to lock a default-initialized mutex twice from the same thread:
If the mutex type is PTHREAD_MUTEX_DEFAULT, attempting to recursively lock the mutex results in undefined behavior.
If I have a global array that multiple threads are writing to and reading from, and I want to ensure that this array remains synchronized between threads, is using std::mutex enough for this purpose, as shown in the pseudo code below? I came across this resource, which makes me think that the answer is positive:
Mutual exclusion locks (such as std::mutex or atomic spinlock) are an example of release-acquire synchronization: when the lock is released by thread A and acquired by thread B, everything that took place in the critical section (before the release) in the context of thread A has to be visible to thread B (after the acquire) which is executing the same critical section.
I'm still interested in other people's opinions.
#include <mutex>
#include <thread>

float* globalArray;
std::mutex globalMutex;

void method1()
{
    std::lock_guard<std::mutex> lock(globalMutex);
    // Perform reads/writes to globalArray
}

void method2()
{
    std::lock_guard<std::mutex> lock(globalMutex);
    // Perform reads/writes to globalArray
}

int main()
{
    std::thread t1(method1);  // pass the callable itself, not the result of calling it
    std::thread t2(method2);
    std::thread t3(method1);
    std::thread t4(method2);
    ...
    std::thread tn(method1);
    // join all threads before main returns
}
This is precisely what mutexes are for. Just try not to hold them any longer than necessary to minimize the costs of contention.
I'm trying to replace Boost functionality with STL functionality in C++11.
There is a write function in my multi-threaded application.
First, the function verifies the data. Next, it writes to it.
There are two locks involved, as shown below:
class Data
{
    mutable boost::shared_mutex mut;

    void Write()
    {
        boost::upgrade_lock<boost::shared_mutex> ulock(mut);
        // ... Verification statements
        boost::upgrade_to_unique_lock<boost::shared_mutex> lock(ulock);
        // ... Writing statements
    }
};
I'm new to Boost's locking facilities. Can you please explain what this does and how I can achieve the same functionality with STL features?
C++11 doesn't provide shared locks at all. C++14 does, but doesn't allow them to be upgraded to exclusive locks; you'd need a second mutex for that, something like:
mutable std::shared_timed_mutex read_mutex;
std::mutex write_mutex;

void Write() {
    std::shared_lock<std::shared_timed_mutex> read_lock(read_mutex);
    // ... Verification statements
    std::lock_guard<std::mutex> write_lock(write_mutex);
    // ... Writing statements
}
You'll need to take some care only to acquire the write lock while already holding the read lock, to avoid deadlocks. If you have a working Boost solution, it might be better to stick to that until the standard library provides equivalent functionality.
In C++ threads you get:
#include <thread>
#include <mutex>
using namespace std;
mutex mu;
// lock_guard: acquires the mutex mu and locks it; when the lock_guard object goes out of scope, the mutex is unlocked.
lock_guard<mutex> lock(mu);
// unique_lock: more flexible than lock_guard, e.g. it can be unlocked and relocked explicitly
// http://en.cppreference.com/w/cpp/thread/unique_lock
unique_lock<mutex> ulock(mu);
ulock.unlock();
ulock.lock();
Do all mutex implementations ultimately make the same basic system/hardware calls, meaning that they can be interchanged?
Specifically, if I'm using the __gnu_parallel algorithms (which use OpenMP) and I want to make the classes they call thread-safe, may I use boost::mutex for the locking? Or must I write my own mutex such as the one described here:
//An openmp mutex. Can this be replaced with boost::mutex?
class Mutex {
public:
Mutex() { omp_init_lock(&_mutex); }
~Mutex() { omp_destroy_lock(&_mutex); }
void lock() { omp_set_lock(&_mutex); }
void unlock() { omp_unset_lock(&_mutex); }
private:
omp_lock_t _mutex;
};
Edit: the link above to the OpenMP mutex seems to be broken; for anyone interested, the lock that goes with this mutex is along these lines:
class Lock
{
public:
Lock(Mutex& mutex)
: m_mutex(mutex),
m_release(false)
{
m_mutex.lock();
}
~Lock()
{
if (!m_release)
m_mutex.unlock();
}
bool operator()() const
{
return !m_release;
}
void release()
{
if (!m_release)
{
m_release = true;
m_mutex.unlock();
}
}
private:
Mutex& m_mutex;
bool m_release;
};
This link provides a useful discussion:
http://groups.google.com/group/comp.programming.threads/browse_thread/thread/67e7b9b9d6a4b7df?pli=1
Paraphrasing: (at least on Linux) Boost.Thread and OpenMP are both built on top of pthreads, so in principle they should be able to be mixed (as Anders says, +1), but mixing threading technologies in this way is generally a bad idea (as Andy says, +1).
You should not mix synchronization mechanisms. For example, the current pthreads mutex implementation is based on futexes and differs from previous pthreads implementations (see man 7 pthreads). If you have created your own abstraction layer, you should use it consistently. Also consider what you actually need: inter-thread or inter-process synchronization?
If you need to cooperate with code that uses boost::mutex, you should use boost::mutex instead of the OpenMP locks.
Additionally, IMHO it is quite strange to use OpenMP library functions to implement a mutex.
The part requiring compatibility is thread suspension, rescheduling, and context switching. As long as the threads are real threads scheduled by the operating system, you should be able to use any mutex implementation that relies on some kind of kernel primitive for suspending and resuming a waiting thread.