Is unique_lock unlocked when a function is called? - c++

Let's say I have a situation like this:
void consumer() {
    unique_lock<mutex> lock(mtx);
    foo();
}

void foo() {
    /* does the thread still own the mutex here? */
}
I expect it does but I'm not 100% sure.

The destructor of unique_lock calls mtx.unlock(). The destructor is called at the end of the lifetime of the lock. Generally (see comments), the end of the lifetime of the lock is:
void consumer() {
    unique_lock<mutex> lock(mtx);
    foo();
} // <- here.
So yes, the mutex is still locked (and owned by the calling thread) inside foo().
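As a quick sanity check, a hypothetical probe thread (not part of the original question) can call try_lock() on the mutex while foo() is running; it will fail, because the thread that entered consumer() still owns the mutex:

#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;

void foo() {
    // Probe from a *different* thread: try_lock() returns false here,
    // because the thread running consumer() still owns mtx.
    std::thread probe([] {
        bool got = mtx.try_lock();
        std::cout << std::boolalpha << "probe acquired mtx: " << got << '\n'; // prints false
        if (got) mtx.unlock(); // defensive; not reached while consumer() holds the lock
    });
    probe.join();
}

void consumer() {
    std::unique_lock<std::mutex> lock(mtx);
    foo();
} // <- mtx is released here, when lock is destroyed

int main() {
    consumer();
}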

Related

Why does re-using the mutex of a condition variable result in deadlock?

I came across the code below while seeking help with the implementation of std::condition_variable in C++11. As written, the code runs correctly, whereas adding the commented-out line in function void g() occasionally results in deadlock. I want to know why, and what the exact inner mechanism of std::condition_variable::wait() is (the cppreference page really confuses me). Thanks in advance.
#include <thread>
#include <mutex>
#include <condition_variable>
#include <iostream>

std::mutex mtx;
std::condition_variable cv;

void f() {
    {
        std::unique_lock<std::mutex> lock( mtx );
        cv.wait( lock );
    }
    std::cout << "f()\n";
}

void g() {
    // std::unique_lock<std::mutex> lock( mtx ); adding this line will result in
    // deadlock.
    std::this_thread::sleep_for( std::chrono::seconds(1) );
    cv.notify_one();
}

int main() {
    for (int i = 1; i <= 100; i++) {
        std::cout << i << std::endl;
        std::thread t1{ f };
        std::thread t2{ g };
        t2.join();
        t1.join();
    }
}
You should associate a condition variable with an actual condition, and also account for spurious wakeups. In your example, the code can deadlock if you signal the condition variable first and only then go to sleep on it via wait().
So your code should ideally look something like the following, where if you signal before the other thread sleeps in wait(), the changed condition tells it that it should not sleep at all:
void f() {
    {
        std::unique_lock<std::mutex> lock( mtx );
        while (some_boolean) {
            cv.wait( lock );
        }
    }
    std::cout << "f()\n";
}

void g() {
    std::unique_lock<std::mutex> lock( mtx );
    change_some_boolean();
    cv.notify_one();
}
Note that it does not matter whether the lock is held when you call notify_one() in g(). You should, however, make sure that you hold the lock when you change_some_boolean().
Creating a thread that runs f() before creating a thread that runs g() does not guarantee that f() will start running before g() does. When g() starts first it grabs the lock, sleeps for one second, then notifies the condition variable. Since nobody is waiting on the condition, that notify has no effect. When g() returns it releases the lock. Then f() gets the lock and calls wait(). Nobody wakes it up, and f() just keeps on waiting. This isn't strictly a deadlock; any thread could still call notify_one() and wake up f().
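For completeness, here is a minimal, self-contained version of the suggested fix applied to the original f()/g() pair. A ready flag (the name is illustrative) serves as the condition, and the predicate overload of wait() replaces the explicit while loop:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
bool ready = false; // the boolean condition, protected by mtx

void f() {
    {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return ready; }); // equivalent to: while (!ready) cv.wait(lock);
    }
    std::cout << "f()\n";
}

void g() {
    {
        std::lock_guard<std::mutex> lock(mtx);
        ready = true; // change the condition while holding the lock
    }
    cv.notify_one(); // notifying without holding the lock is fine
}

int main() {
    std::thread t1{ f };
    std::thread t2{ g };
    t2.join();
    t1.join();
}

With this arrangement the order in which f() and g() run no longer matters: if g() notifies first, f() sees ready == true and never blocks.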
The answer is in the example at the link you provide: Condition Variable on cppreference
// Manual unlocking is done before notifying, to avoid waking up
// the waiting thread only to block again (see notify_one for details)
lk.unlock();
cv.notify_one();
Your example, however, does NOT unlock the mutex (via your local lock variable) prior to calling notify_one() when it should. Your g() function should look like this:
void g() {
    std::unique_lock<std::mutex> lock( mtx );
    std::this_thread::sleep_for( std::chrono::seconds(1) );
    lock.unlock();
    cv.notify_one();
}

How to best write multiple functions that use the same mutexes

Suppose I have some code that looks like this:
std::mutex g_mutex;

void foo()
{
    g_mutex.lock();
    ...
    g_mutex.unlock();
}

void foobar()
{
    g_mutex.lock();
    ...
    g_mutex.unlock();
    foo();
    g_mutex.lock();
    ...
    g_mutex.unlock();
}
Is there a pattern I can use such that in foobar() I can just lock the mutex once?
I can think of two solutions:
1. Use std::recursive_mutex
This way there's no problem if the same thread locks the mutex more than once; you don't have to unlock it before calling the function.
Use lock_guard or unique_lock though; don't litter your code with lock/unlock pairs.
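A minimal sketch of this option, assuming the global is retyped as a std::recursive_mutex (names carried over from the question for illustration):

#include <mutex>

std::recursive_mutex g_mutex;

void foo()
{
    std::lock_guard<std::recursive_mutex> guard(g_mutex);
    // do foo stuff
}

void foobar()
{
    std::lock_guard<std::recursive_mutex> guard(g_mutex);
    // ...
    foo(); // re-locking from the same thread is allowed with a recursive mutex
    // ...
}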
2. Make foo() take a guard as argument
Rewrite foo() like this:
void foo(lock_guard<mutex>&)
{
    // do foo stuff
}
This way it's impossible to call foo() without a mutex being locked. The lock_guard object is a token saying foo() can only be called with synchronization. Of course, it's still possible to mess it up by locking an unrelated mutex (which is unlikely if you are implementing the methods of a class, where there's only one mutex visible to be locked).
You can see more details of this approach in this pre-C++11 article by Andrei.
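For illustration, a call site under this pattern could look like the following sketch (reusing the question's g_mutex):

#include <mutex>

std::mutex g_mutex;

void foo(std::lock_guard<std::mutex>&)
{
    // do foo stuff; the parameter is proof that the caller holds a lock
}

void foobar()
{
    std::lock_guard<std::mutex> guard(g_mutex); // lock exactly once
    // ...
    foo(guard); // pass the token; no second lock is taken
    // ...
}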
You can use std::lock_guard<std::mutex>:
void foobar()
{
    std::lock_guard<std::mutex> guard(g_mutex);
    // ...
} // releases g_mutex automatically
Usually, you rely on the mutex being reentrant, that is, it can be locked many times by the same thread. Note that a plain std::mutex is not reentrant (locking it twice from the same thread is undefined behaviour), so this approach requires std::recursive_mutex:
void foo() {
    g_mutex.lock();
    // do foo stuff
    g_mutex.unlock();
}

void foobar() {
    g_mutex.lock();
    foo();
    g_mutex.unlock();
}
If you don't want that for some reason, there is a messier approach but it's not recommended. This would typically be done only in a class, where you can restrict access to private functions.
void foo_private()
{
    // do foo stuff with the assumption that the lock is acquired.
}

void foo() {
    g_mutex.lock();
    foo_private();
    g_mutex.unlock();
}

void foobar() {
    g_mutex.lock();
    foo_private();
    g_mutex.unlock();
}
Also, as stated in the other answer to your question, you should use std::lock_guard to acquire the lock, as it will correctly unlock the mutex in the event of an exception (or if you forget to unlock it yourself).
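A sketch combining the two suggestions (private unlocked helper plus std::lock_guard in the public functions); the structure follows the example above, with the guard usage added:

#include <mutex>

std::mutex g_mutex;

void foo_private()
{
    // do foo stuff with the assumption that the caller already holds g_mutex
}

void foo()
{
    std::lock_guard<std::mutex> guard(g_mutex);
    foo_private();
}

void foobar()
{
    std::lock_guard<std::mutex> guard(g_mutex);
    foo_private();
    // other work done under the same single lock
}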

Pausing std::thread until a function finishes

class Class {
public:
    Class();
private:
    std::thread* updationThread;
};
Constructor:
Class::Class() {
    updationThread = new std::thread(&someFunc);
}
At some point in my application, I have to pause that thread and call a function and after execution of that function I have to resume the thread. Let's say it happens here:
void Class::aFunction() {
    functionToBeCalled(); // Before this, the thread should be paused
    // Now, the thread should be resumed.
}
I have tried to use another thread with function functionToBeCalled() and use thread::join but was unable to do that for some reason.
How can I pause a thread or how can I use thread::join to pause a thread until the other finishes?
I don't think you can easily (in a standard way) "pause" some thread and then resume it. I imagine you can send SIGSTOP and SIGCONT if you are using some Unix-flavored OS, but otherwise you should properly mark the atomic parts inside someFunc() with mutexes and locks, and wrap functionToBeCalled() with a lock on the corresponding mutex:
std::mutex m; // Global mutex, you should find a better place to put it
// (possibly in your object)
and inside the function:
void someFunc() {
    // I am just making up stuff here
    while(...) {
        func1();
        {
            std::lock_guard<std::mutex> lock(m); // lock the mutex
            ...; // Stuff that must not run with functionToBeCalled()
        } // Mutex unlocked here, by end of scope
    }
}
and when calling functionToBeCalled():
void Class::aFunction() {
    std::lock_guard<std::mutex> lock(m); // lock the mutex
    functionToBeCalled();
} // Mutex unlocked here, by end of scope
You can use a condition variable. An example similar to your situation is given here:
http://en.cppreference.com/w/cpp/thread/condition_variable
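A minimal pause/resume sketch along those lines, assuming the worker can check a flag between units of work (all names here, paused, pause_mtx, pause_cv, checkPause, are illustrative, not from the question):

#include <condition_variable>
#include <mutex>

std::mutex pause_mtx;
std::condition_variable pause_cv;
bool paused = false;

void checkPause() // called by the worker thread between units of work
{
    std::unique_lock<std::mutex> lock(pause_mtx);
    pause_cv.wait(lock, [] { return !paused; }); // block while paused
}

void someFunc() // the worker loop running in updationThread
{
    for (;;) {
        checkPause();
        // ... do one unit of work ...
    }
}

void pauseThread()
{
    std::lock_guard<std::mutex> lock(pause_mtx);
    paused = true;
}

void resumeThread()
{
    {
        std::lock_guard<std::mutex> lock(pause_mtx);
        paused = false;
    }
    pause_cv.notify_one();
}

With this, aFunction() would call pauseThread(), run functionToBeCalled(), then call resumeThread(). The worker only pauses at the points where it calls checkPause(), which is usually what you want anyway.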

How to use recursive QMutex

I'm trying to use a recursive QMutex. I read the QMutex class reference but I don't understand how to do it; can someone give me an example?
I need some way to lock a QMutex that can be unlocked after or before the lock method is called.
If a recursive mutex is not the way to do it, is there any other way?
To create a recursive QMutex you simply pass QMutex::Recursive at construction time, for instance:
QMutex mutex(QMutex::Recursive);
int number = 6;

void method1()
{
    mutex.lock();
    number *= 5;
    mutex.unlock();
}

void method2()
{
    mutex.lock();
    number *= 3;
    mutex.unlock();
}
Recursive means that you can lock the mutex several times from the same thread; you don't have to unlock it first. If I understood your question correctly, that's what you want.
Be careful: if you lock recursively you must call unlock the same number of times. A better way to lock/unlock a mutex is to use a QMutexLocker:
#include <QMutexLocker>

QMutex mutex(QMutex::Recursive);
int number = 6;

void method1()
{
    QMutexLocker locker(&mutex); // Here mutex is locked
    number *= 5;
    // Here locker goes out of scope.
    // When locker is destroyed it automatically unlocks mutex.
}

void method2()
{
    QMutexLocker locker(&mutex);
    number *= 3;
}
A recursive mutex can be locked multiple times from a single thread without needing to be unlocked, as long as the same number of unlock calls are made from the same thread. This mechanism comes in handy when a shared resource is used by more than one function, and one of those functions calls another function in which the resource is used.
Consider the following class:
class Foo {
public:
    Foo();
    void bar();  // Does something to the resource
    void thud(); // Calls bar() then does something else to the resource
private:
    Resource mRes;
    QMutex mLock;
};
An initial implementation may look something like the following:
Foo::Foo() {}

void Foo::bar() {
    QMutexLocker locker(&mLock);
    mRes.doSomething();
}

void Foo::thud() {
    QMutexLocker locker(&mLock);
    bar();
    mRes.doSomethingElse();
}
The above code will DEADLOCK on calls to thud(). mLock will be acquired in the first line of thud() and once again by the first line of bar(), which will block waiting for thud() to release the lock.
A simple solution would be to make the lock recursive in the ctor.
Foo::Foo() : mLock(QMutex::Recursive) {}
This is an OK fix, and will be suitable for many situations; however, one should be aware that there may be a performance penalty to using this solution, since each recursive mutex call may require a system call to identify the current thread id.
In addition to the thread id check, all calls to thud() still execute QMutex::lock() twice!
Designs which require a recursive mutex may be able to be refactored to eliminate that need. In general, the need for a recursive mutex is a "code smell" and indicates a need to adhere to the principle of separation of concerns.
For the class Foo, one could imagine creating a private function call which performs the shared computation and keeping the resource locking at the public interface level.
class Foo {
public:
    Foo();
    void bar();  // Does something to the resource
    void thud(); // Does something then does something else to the resource
private:
    void doSomething();
private:
    Resource mRes;
    QMutex mLock;
};
Foo::Foo() {}

// public
void Foo::bar() {
    QMutexLocker locker(&mLock);
    doSomething();
}

void Foo::thud() {
    QMutexLocker locker(&mLock);
    doSomething();
    mRes.doSomethingElse();
}

// private
void Foo::doSomething() {
    mRes.doSomething(); // Notice - no mutex in private function
}
Recursive mode just means that if a thread owns a mutex, and the same thread tries to lock the mutex again, that will succeed. The requirement is that calls to lock/unlock are balanced.
In non recursive mode, this will result in a deadlock.

Do I need to call unlock() in the Boost thread function?

I have the following lines of code where I used C++ Boost thread:
void threadFunc()
{
    boost::mutex::scoped_lock lock(m_Mutex);
    // some code here...
    condition.notify_one();
}
So should I call unlock() function before the last line, like the following? What is the difference if I don't call unlock()?
void threadFunc()
{
    boost::mutex::scoped_lock lock(m_Mutex);
    // some code here...
    lock.unlock();
    condition.notify_one();
}
No -- the point of the scoped_lock class is that the lock is tied to the scope -- i.e., when the scoped_lock object goes out of scope, the lock is automatically released. This assures (for example) that if any of the intervening code throws an exception, the lock will still be released.
No. The lock is scoped, so it unlocks "automatically" as it goes out of scope. See RAII.
http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
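To see that point in code, here is a small sketch (the throwing function is contrived and not from the question) showing that the lock is released even when an exception propagates out of the scope:

#include <boost/thread/mutex.hpp>
#include <stdexcept>

boost::mutex m_Mutex;

void mayThrow()
{
    throw std::runtime_error("boom");
}

void threadFunc()
{
    boost::mutex::scoped_lock lock(m_Mutex);
    mayThrow(); // even if this throws ...
}               // ... lock's destructor still runs here and releases m_Mutex

int main()
{
    try {
        threadFunc();
    } catch (const std::exception&) {
        // m_Mutex is already unlocked at this point, so locking it
        // again does not deadlock.
        boost::mutex::scoped_lock lock(m_Mutex);
    }
}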