scoped_lock inside lock_guard, is it redundant? - c++

I'm new to multi-threaded programming.
I'm currently working on a serial port communication project and have been searching related code for reference.
I found some code where someone used a scoped_lock inside a function called under a lock_guard, as below:
void B(){
    boost::mutex::scoped_lock b_lock(b_mutex);
    /* do something */
}

void A(){
    const std::lock_guard<std::mutex> a_lock(a_mutex);
    /* do something */
    B();
}
According to this post, I think both of these just lock a mutex. And since B() is called by A(), maybe the scoped_lock inside B() is not needed and can be removed. Is that right?

They lock different mutexes. Whether this makes sense depends on what /* do something */ actually is. For example it could be:
void B(){
    boost::mutex::scoped_lock b_lock(b_mutex);
    /* do something that needs b_mutex locked */
}

void A(){
    const std::lock_guard<std::mutex> a_lock(a_mutex);
    /* do something that needs a_mutex locked */
    B();
}
It seems like A could be changed to
void A(){
    {
        const std::lock_guard<std::mutex> a_lock(a_mutex);
        /* do something that needs a_mutex locked */
    }
    B();
}
But whether this is still correct depends on details that were left out of the posted code.
Locking two different mutexes is not redundant, because other threads may lock only one of them.
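For illustration, here is a minimal self-contained sketch (the main() driver and the use of std::mutex for both locks are assumptions, not from the question) of why B() still needs its own lock: another thread may call B() directly and never touch a_mutex:
#include <mutex>
#include <thread>

std::mutex a_mutex;
std::mutex b_mutex;

void B() {
    std::lock_guard<std::mutex> b_lock(b_mutex); // protects the data B touches
    /* do something that needs b_mutex locked */
}

void A() {
    std::lock_guard<std::mutex> a_lock(a_mutex); // protects the data A touches
    /* do something that needs a_mutex locked */
    B();                                         // b_mutex is taken here while a_mutex stays held
}

int main() {
    std::thread t1(A);
    std::thread t2(B); // calls B() directly, never touching a_mutex
    t1.join();
    t2.join();
}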

Related

Qt/C++: Recursive mutex, 'sync zones' and blocking signals

Firstly I'd like to point out that I've looked this up but can't find the answer I'm looking for, or have got confused by overly detailed answers.
I have a program which uses two threads. A Boolean value needs to be set and read in Thread A but only read in Thread B.
Thread A:
Module::Module(){
}

void Module::foo(){
    mutex.lock();
    request = true;   // 'request' is a member of Module
    mutex.unlock();
}

void Module::bar(){
    mutex.lock();
    if (request){
        mutex.unlock();
        // do stuff
    }else{
        mutex.unlock();
    }
}
Thread B:
Provider::Provider(){
    module = new Module; // pointer to class 'request' lives in
}

void Provider::foo(){
    mutex.lock();
    if (module->request){
        mutex.unlock();
        // do stuff
    }else{
        mutex.unlock();
    }
}
My question might seem rather trivial, but it's bugged me. Thread A cannot read and write at the same time, thus I'd argue a recursive mutex is not required for A. However, there is a small possibility that foo() and bar() could get called simultaneously from Thread B (signals and slots). Does this mean I need a recursive mutex?
Also, is there any reason not to use a Qt::BlockingQueuedConnection? A colleague argued that this is dangerous as it sends the calling thread to sleep until the signal has executed the slot, but is this not sort of the same as a mutex?
Furthermore, I've seen a post regarding structuring mutexes (pthread mutex locking variables used in statements). It mentions making local copies of values. If I were to employ something similar for Thread A, e.g.
mutex.lock();
requestCopy = request;
mutex.unlock();
...
if(requestCopy){
    // do stuff
}
Will this also block access to request wherever requestCopy is being used? I was looking to use this style in my code for simplicity, but would this not work if you read AND write in a thread?
Any help would be great.
From what you have shown, it looks like the following (rewritten).
Some module (Thread A):
class Module {
private:
    bool request = false;
    QMutex m;
public:
    void set_request(bool b) {
        QMutexLocker lock(&m);
        request = b;
    }
    bool get_request() {
        QMutexLocker lock(&m);
        return request;
    }
    void bar() {
        if (get_request()) {
            // do stuff
        }
    }
};
Thread B:
class Provider {
public:
    Provider() {
        module = new Module();
    }
    void foo() {
        if (module->get_request()){
            // do stuff
        }
    }
private:
    Module *module;
};
If this is really the case (and everything is fine this way), there is no need for a recursive mutex.
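A hypothetical driver (not part of the question; std::thread is used here only for brevity, Qt threads behave the same way) showing why the same thread never locks m twice with this accessor approach:
#include <thread>

int main() {
    Module module;

    // Thread A both writes and reads the flag; thread B only reads it.
    // Each accessor locks m only for the duration of that one call,
    // so no thread ever tries to lock m while already holding it.
    std::thread a([&] { module.set_request(true); module.bar(); });
    std::thread b([&] { if (module.get_request()) { /* do stuff */ } });

    a.join();
    b.join();
}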

c++: Function that locks mutex for other function but can itself be executed in parallel

I have a question regarding thread safety and mutexes. I have two functions that may not be executed at the same time because this could cause problems:
std::mutex mutex;

void A() {
    std::lock_guard<std::mutex> lock(mutex);
    // do something (shouldn't be done while function B is executing)
}

T B() {
    std::lock_guard<std::mutex> lock(mutex);
    // do something (shouldn't be done while function A is executing)
    return something;
}
Now the thing is that functions A and B should not be executed at the same time; that's why I use the mutex. However, it is perfectly fine if function B is called simultaneously from multiple threads, yet this is also prevented by the mutex (and I don't want that). Is there a way to ensure that A and B are not executed at the same time while still letting function B be executed multiple times in parallel?
If C++14 is an option, you could use a shared mutex (sometimes called "reader-writer" mutex). Basically, inside function A() you would acquire a unique (exclusive, "writer") lock, while inside function B() you would acquire a shared (non-exclusive, "reader") lock.
As long as a shared lock exists, the mutex cannot be acquired exclusively by other threads (but can still be acquired non-exclusively); as long as an exclusive lock exists, the mutex cannot be acquired by any other thread at all.
The result is that you can have several threads concurrently executing function B(), while the execution of function A() prevents concurrent executions of both A() and B() by other threads:
#include <shared_mutex>

std::shared_timed_mutex mutex;

void A() {
    std::unique_lock<std::shared_timed_mutex> lock(mutex);
    // do something (shouldn't be done while function B is executing)
}

T B() {
    std::shared_lock<std::shared_timed_mutex> lock(mutex);
    // do something (shouldn't be done while function A is executing)
    return something;
}
Notice, that some synchronization overhead will always be present even for concurrent executions of B(), and whether this will eventually give you better performance than using plain mutexes is highly dependent on what is going on inside and outside those functions - always measure before committing to a more complicated solution.
Boost.Thread also provides an implementation of shared_mutex.
You have an option in C++14: use std::shared_timed_mutex.
A would use lock(), B would use lock_shared().
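Spelled out with the raw member functions, a sketch only (int stands in for the question's T; in real code prefer the RAII guards from the previous answer, since manual unlocking is not exception safe):
#include <shared_mutex>

std::shared_timed_mutex mutex;

void A() {
    mutex.lock();          // exclusive ("writer") lock
    // work that must not overlap with B()
    mutex.unlock();
}

int B() {                  // int stands in for the question's T
    mutex.lock_shared();   // shared ("reader") lock: several B() calls may overlap
    int something = 42;    // placeholder work
    mutex.unlock_shared();
    return something;
}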
This is quite possibly full of bugs, but since you have no C++14 you could create a lock-counting wrapper around std::mutex and use that:
#include <mutex>

// Lock-counting class: the first Lock acquires the shared std::mutex,
// the last one to go away releases it.
class SharedLock
{
public:
    SharedLock(std::mutex& m) : count(0), shared(m) {}

    friend class Lock;

    // RAII lock
    class Lock
    {
    public:
        Lock(SharedLock& l) : lock(l) { lock.lock(); }
        ~Lock() { lock.unlock(); }
    private:
        SharedLock& lock;
    };

private:
    void lock()
    {
        std::lock_guard<std::mutex> guard(internal);
        if (count == 0)
        {
            shared.lock();
        }
        ++count;
    }

    void unlock()
    {
        std::lock_guard<std::mutex> guard(internal);
        --count;
        if (count == 0)
        {
            shared.unlock();
        }
    }

    int count;
    std::mutex& shared;
    std::mutex internal;
};

std::mutex shared_mutex;

void A()
{
    std::lock_guard<std::mutex> lock(shared_mutex);
    // ...
}

void B()
{
    static SharedLock shared_lock(shared_mutex);
    SharedLock::Lock mylock(shared_lock);
    // ...
}
... unless you want to dive into Boost, of course.
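A hypothetical driver for the wrapper above (the thread counts and setup are assumptions): several threads can sit inside B() at once, while A() excludes everything.
#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> readers;
    for (int i = 0; i < 4; ++i)
        readers.emplace_back(B);   // these can all be inside B() at the same time
    std::thread writer(A);         // A() waits until the last B() has released shared_mutex

    for (auto& t : readers)
        t.join();
    writer.join();
}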

How to best write multiple functions that use the same mutexes

Suppose I have some code that looks like this:
std::mutex g_mutex;

void foo()
{
    g_mutex.lock();
    ...
    g_mutex.unlock();
}

void foobar()
{
    g_mutex.lock();
    ...
    g_mutex.unlock();
    foo();
    g_mutex.lock();
    ...
    g_mutex.unlock();
}
Is there a pattern I can use such that in foobar() I can just lock the mutex once?
I can think of two solutions:
1. Use std::recursive_mutex
This way there's no problem if the same thread locks the mutex more than once; you don't have to unlock it before calling the function.
Use lock_guard or unique_lock, though; don't litter your code with lock()/unlock() pairs. A sketch follows after option 2.
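A minimal sketch of this option, reusing the names from the question (the guard placement is an assumption):
#include <mutex>

std::recursive_mutex g_mutex;

void foo()
{
    std::lock_guard<std::recursive_mutex> guard(g_mutex);
    // ...
}

void foobar()
{
    std::lock_guard<std::recursive_mutex> guard(g_mutex); // first lock
    // ...
    foo(); // same thread locks g_mutex again: fine with a recursive mutex
    // ...
}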
2. Make foo() take a guard as argument
Rewrite foo() like this:
void foo(std::lock_guard<std::mutex>&)
{
    // do foo stuff
}
This way it's impossible to call foo() without a mutex being locked. The lock_guard object is a token saying foo() can only be called with synchronization. Of course, it's still possible to mess it up by locking an unrelated mutex (which is unlikely if you are implementing the methods of a class, where there's only one mutex visible to be locked).
You can see more details of this approach in this pre-C++11 article by Andrei.
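For illustration, a caller-side sketch of this token approach (names reused from the question):
#include <mutex>

std::mutex g_mutex;

void foo(std::lock_guard<std::mutex>&)
{
    // do foo stuff; the parameter proves the caller already holds g_mutex
}

void foobar()
{
    std::lock_guard<std::mutex> guard(g_mutex);
    // ...
    foo(guard); // pass the guard as the "I hold the lock" token
    // ...
}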
You can use std::lock_guard<std::mutex>:
void foobar()
{
    std::lock_guard<std::mutex> guard(g_mutex);
    // ...
} // releases g_mutex automatically
Usually, you rely on the mutex being reentrant, that is, it can be locked again by a thread that already holds it (in standard C++ that means a std::recursive_mutex):
void foo() {
    g_mutex.lock();  // requires g_mutex to be a recursive mutex
    // do foo stuff
    g_mutex.unlock();
}

void foobar() {
    g_mutex.lock();
    foo();           // locks g_mutex again from the same thread
    g_mutex.unlock();
}
If you don't want that for some reason, there is a messier approach but it's not recommended. This would typically be done only in a class, where you can restrict access to private functions.
void foo_private()
{
    // do foo stuff with the assumption that the lock is acquired
}

void foo() {
    g_mutex.lock();
    foo_private();
    g_mutex.unlock();
}

void foobar() {
    g_mutex.lock();
    foo_private();
    g_mutex.unlock();
}
Also, as stated in the other answer to your question, you should use std::lock_guard to acquire the lock, as it will correctly unlock your object in the event of an exception (or if you forget to).
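For completeness, a sketch of the same private-function approach using std::lock_guard instead of manual lock()/unlock(), as the last paragraph suggests:
#include <mutex>

std::mutex g_mutex;

void foo_private()
{
    // do foo stuff; assumes g_mutex is already held by the caller
}

void foo()
{
    std::lock_guard<std::mutex> lock(g_mutex);
    foo_private();
}

void foobar()
{
    std::lock_guard<std::mutex> lock(g_mutex);
    foo_private();
    // ...
}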

How to use recursive QMutex

I'm trying to use a recursive QMutex. I read the QMutex class reference but I don't understand how to do it. Can someone give me an example?
I need some way to lock a QMutex that can be unlocked after or before the lock method is called.
If a recursive mutex is not the way to do this, is there any other way?
To create a recursive QMutex you simply pass QMutex::Recursive at construction time, for instance:
QMutex mutex(QMutex::Recursive);
int number = 6;

void method1()
{
    mutex.lock();
    number *= 5;
    mutex.unlock();
}

void method2()
{
    mutex.lock();
    number *= 3;
    mutex.unlock();
}
Recursive means that you can lock the mutex several times from the same thread without having to unlock it first. If I understood your question correctly, that's what you want.
Be careful: if you lock recursively you must call unlock() the same number of times. A better way to lock/unlock a mutex is to use a QMutexLocker:
#include <QMutexLocker>

QMutex mutex(QMutex::Recursive);
int number = 6;

void method1()
{
    QMutexLocker locker(&mutex); // Here mutex is locked
    number *= 5;
    // Here locker goes out of scope.
    // When locker is destroyed it automatically unlocks mutex.
}

void method2()
{
    QMutexLocker locker(&mutex);
    number *= 3;
}
A recursive mutex can be locked multiple times from a single thread without needing to be unlocked first, as long as the same number of unlock calls are eventually made from that thread. This mechanism comes in handy when a shared resource is used by more than one function, and one of those functions calls another function in which the resource is used.
Consider the following class:
class Foo {
public:
    Foo();
    void bar();  // Does something to the resource
    void thud(); // Calls bar(), then does something else to the resource
private:
    Resource mRes;
    QMutex mLock;
};
An initial implementation may look something like the following:
Foo::Foo() {}

void Foo::bar() {
    QMutexLocker locker(&mLock);
    mRes.doSomething();
}

void Foo::thud() {
    QMutexLocker locker(&mLock);
    bar();
    mRes.doSomethingElse();
}
The above code will DEADLOCK on calls to thud. mLock will be acquired in the first line of thud() and once again by the first line of bar() which will block waiting for thud() to release the lock.
A simple solution would be to make the lock recursive in the ctor.
Foo::Foo() : mLock(QMutex::Recursive) {}
This is an OK fix and will be suitable for many situations, however one should be aware that there may be a performance penalty to using this solution, since each recursive mutex call may require a system call to identify the current thread id.
In addition to the thread id check, all calls to thud() still execute QMutex::lock() twice!
Designs which require a recursive mutex can often be refactored to eliminate that need. In general, needing a recursive mutex is a "code smell" and indicates a need to adhere to the principle of separation of concerns.
For the class Foo, one could imagine creating a private function which performs the shared computation, keeping the resource locking at the public interface level.
class Foo {
public:
    Foo();
    void bar();  // Does something to the resource
    void thud(); // Does something, then does something else to the resource
private:
    void doSomething();
private:
    Resource mRes;
    QMutex mLock;
};
Foo::Foo() {}

// public
void Foo::bar() {
    QMutexLocker locker(&mLock);
    doSomething();
}

void Foo::thud() {
    QMutexLocker locker(&mLock);
    doSomething();
    mRes.doSomethingElse();
}

// private
void Foo::doSomething() {
    mRes.doSomething(); // Notice - no mutex in private function
}
Recursive mode just means that if a thread owns a mutex, and the same thread tries to lock the mutex again, that will succeed. The requirement is that calls to lock/unlock are balanced.
In non-recursive mode, locking the mutex again from the same thread will result in a deadlock.
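A tiny sketch of the difference (the function and variable names are just for illustration):
#include <QMutex>

QMutex recursiveMutex(QMutex::Recursive);
QMutex plainMutex; // non-recursive by default

void demo()
{
    recursiveMutex.lock();
    recursiveMutex.lock();   // same thread again: succeeds, lock count is now 2
    recursiveMutex.unlock();
    recursiveMutex.unlock(); // must unlock as many times as locked

    plainMutex.lock();
    // plainMutex.lock();    // same thread again: would deadlock right here
    plainMutex.unlock();
}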

waiting on a condition variable in a helper function that's called from the function that acquires the lock

I'm new to the boost threads library.
I have a situation where I acquire a scoped_lock in one function and need to wait on it in a callee.
The code is along the lines of:
class HavingMutex
{
public:
    ...
private:
    static boost::mutex m;
    static boost::condition_variable *c;
    static void a();
    static void b();
    static void d();
};

void HavingMutex::a()
{
    boost::mutex::scoped_lock lock(m);
    ...
    b(); // Need to pass lock here. Dunno how!
}

void HavingMutex::b(lock)
{
    if (some condition)
        d(lock); // Need to pass lock here. How?
}

void HavingMutex::d(/* Need to get lock here */)
{
    c->wait(lock); // Need to pass lock here (doesn't allow direct passing of mutex m)
}
Basically, in function d(), I need to access the scoped lock I acquired in a() so that I can wait on it. (Some other thread will notify).
Any help appreciated. Thanks!
Have you tried a simple reference? According to the Boost 1.41 documentation at http://www.boost.org/doc/libs/1_41_0/doc/html/thread/synchronization.html it should be all that is required.
...

void HavingMutex::a()
{
    boost::mutex::scoped_lock lock(m);
    ...
    b(lock);
}

void HavingMutex::b(boost::mutex::scoped_lock &lock)
{
    if (some condition) // consider while (!some_condition) here
        d(lock);
}

void HavingMutex::d(boost::mutex::scoped_lock &lock)
{
    c->wait(lock);
}

void HavingMutex::notify()
{
    // set_condition;
    c->notify_one();
}
Also, in the Boost examples they have a while loop around wait(). wait() can sometimes be woken up by the system even though the condition is not actually satisfied (a spurious wakeup), so you should re-check the condition after it returns.
I also suggest you reconsider making all the methods and members static. Instead, make them normal members and create one global object.
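Regarding the loop around wait(): a sketch of the usual predicate pattern, assuming a static bool ready member is added to HavingMutex (that member is not in the question):
// Sketch of the wait-in-a-loop pattern; "ready" is an assumed flag.
void HavingMutex::d(boost::mutex::scoped_lock &lock)
{
    while (!ready)     // re-check the predicate: wait() may wake up spuriously
        c->wait(lock); // atomically releases m and sleeps until notified
}

void HavingMutex::notify()
{
    {
        boost::mutex::scoped_lock lock(m);
        ready = true;  // change the predicate while holding the same mutex
    }
    c->notify_one();
}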