Do I need to call unlock() in the Boost thread function? - c++

I have the following lines of code where I used C++ Boost thread:
void threadFunc()
{
    boost::mutex::scoped_lock lock(m_Mutex);
    // some code here...
    condition.notify_one();
}
So should I call unlock() function before the last line, like the following? What is the difference if I don't call unlock()?
void threadFunc()
{
    boost::mutex::scoped_lock lock(m_Mutex);
    // some code here...
    lock.unlock();
    condition.notify_one();
}

No -- the point of the scoped_lock class is that the lock is tied to the scope: when the scoped_lock object goes out of scope, the lock is released automatically. This ensures (for example) that if any of the intervening code throws an exception, the lock is still released.

No. The lock is scoped, so it unlocks "automatically" as it goes out of scope. See RAII.
http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization

Related

Is unique_lock unlocked when a function is called?

Let's say I have a situation like this:
void consumer(){
    unique_lock<mutex> lock(mtx);
    foo();
}
void foo(){
    /* does the thread still own the mutex here? */
}
I expect it does but I'm not 100% sure.
The destructor of unique_lock calls mtx.unlock(). The destructor runs at the end of the lock's lifetime. Generally (see comments), the end of the lifetime of the lock is:
void consumer(){
    unique_lock<mutex> lock(mtx);
    foo();
} // <- here.
So yes, it'll still be locked.

Is using unique_lock in new scope equivalent to unlock call at the end of work with shared resource?

I have seen a lot of examples of code where the developer uses std::unique_lock in a new scope to unlock the mutex automatically:
...
// do some stuff
{
    std::unique_lock<std::mutex> lock(shared_resource_mutex);
    // do some actions with shared resource
}
// do some stuff
...
In my opinion it would be better to implement this behaviour using the unlock method from the std::unique_lock API, in this way:
...
// do some actions
std::unique_lock<std::mutex> lock(shared_resource_mutex);
// do some actions with shared resource
lock.unlock();
// do some actions
...
Are these two fragments of code equivalent? For what purpose do developers use the first variant? Maybe to emphasize (using braces) the code that cannot be executed in parallel?
When the object is destroyed at the end of the scope, the lock is released. That's the whole point of RAII.
The good thing about using RAII is that you cannot forget to unlock and it doesn't matter how you leave the scope. If an exception is thrown for example, the lock will still be released.
If all you need is lock at construction and unlock at destruction, then std::scoped_lock is an even simpler/more appropriate class to use though.
I would say the former method is safer, more consistent and easier to read.
First consider safety:
void function()
{
    std::unique_lock<std::shared_mutex> lock(mtx);
    // exclusive lock stuff
    lock.unlock();
    // std::shared_lock<std::shared_mutex> lock(mtx); // whoops name in use
    std::shared_lock<std::shared_mutex> lock2(mtx);
    // read only shared lock stuff here
    lock2.unlock(); // what if I forget to do this?
    lock.lock(); // if I forgot to call lock2.unlock(): undefined behavior
    // back to the exclusive stuff
    lock.unlock();
}
If you have different locks to acquire/release and you forget to call unlock(), then you may end up trying to lock the same mutex twice from the same thread.
That is undefined behavior, so it may go unnoticed but cause trouble.
And what if you call lock() or unlock() on the wrong lock variable (say, on lock2 rather than lock)? The possibilities are frightening.
Consistency:
Also, not all lock types have a .unlock() function (std::scoped_lock, std::lock_guard) so it is good to be consistent with your coding style.
Easier to read:
It is also easier to see what code sections use locks which makes reasoning on the code simpler:
void function()
{
    {
        std::unique_lock<std::shared_mutex> lock(mtx);
        // exclusive lock stuff
    }
    {
        std::shared_lock<std::shared_mutex> lock(mtx);
        // read only shared lock stuff here
    }
    {
        std::unique_lock<std::shared_mutex> lock(mtx);
        // back to the exclusive stuff
    }
}
Both of your approaches are correct, and you might choose either of them depending on circumstance. For example, when using a condition_variable/lock combination it's often useful to be able to explicitly lock and unlock the lock.
Here's another approach that I find to be both expressive and safe:
#include <mutex>

template<class Mutex, class Function>
decltype(auto) with_lock(Mutex& m, Function&& f)
{
    std::lock_guard<Mutex> lock(m);
    return f();
}

std::mutex shared_resource_mutex;

void something()
{
    with_lock(shared_resource_mutex, [&]
    {
        // some actions
    });
    // some other actions
}

Why C++ concurrency in action listing_6.1 does not use std::recursive_mutex

I am reading the book "C++ Concurrency In Action" and have some question about the mutex used in listing 6.1, the code snippet is below:
void pop(T& value)
{
    std::lock_guard<std::mutex> lock(m);
    if(data.empty()) throw empty_stack();
    value = std::move(data.top());
    data.pop();
}
bool empty() const
{
    std::lock_guard<std::mutex> lock(m);
    return data.empty();
}
The pop method locks the mutex and then calls data.empty(). But the mutex is not a recursive_mutex, and the code works properly. So I wonder what the actual difference is between std::mutex and std::recursive_mutex.
It is calling data.empty(), which is a member function of the data member -- not the same as the empty function you show.
If it were, this would be a recursive call:
bool empty() const
{
    std::lock_guard<std::mutex> lock(m);
    return data.empty();
}
and nothing would work.
Well, recursive_mutex is for... recursive functions!
On some operating systems, locking the same mutex twice can lead to a system error (the lock may be released completely, the application may crash, and all kinds of weird and undefined behaviour may occur).
Look at this (silly) example:
void recursivePusher(int x){
    if (x > 10){
        return;
    }
    std::lock_guard<std::mutex> lock(m);
    queue.push(x);
    recursivePusher(x+1);
}
This function recursively increments x and pushes it into a shared queue.
As we said above, the same mutex may not be locked twice by the same thread, but we do need to make sure the shared queue isn't being altered by multiple threads.
One easy solution is to move the locking outside the recursive function, but what happens if we can't do that? What if the called function is the only one that can lock the shared resource?
For example, my calling function may look like this:
switch(option){
    case case1: recursively_manipulate_shared_array(); break;
    case case2: recursively_manipulate_shared_queue(); break;
    case case3: recursively_manipulate_shared_map(); break;
}
Of course, you wouldn't lock all three (shared_array, shared_map, shared_queue) when only one of them will be altered.
The solution is to use std::recursive_mutex:
void recursivePusher(int x){
    if (x > 10){
        return;
    }
    std::lock_guard<std::recursive_mutex> lock(m);
    queue.push(x);
    recursivePusher(x+1);
}
If the same thread doesn't need to lock the mutex recursively, it should use a regular std::mutex, like in your example.
PS: in your snippet, empty is not the same as T::empty.
Calling data.empty() doesn't call empty recursively.

make function exception-safe

In my multithreaded server I have somefunction(), which needs to protect two global data structures, independent of each other, using EnterCriticalSection.
somefunction()
{
    EnterCriticalSection(&g_List);
    ...
    EnterCriticalSection(&g_Variable);
    ...
    LeaveCriticalSection(&g_Variable);
    ...
    LeaveCriticalSection(&g_List);
}
Following the advice of better programmers, I'm going to use a RAII wrapper. For example:
class Locker
{
public:
    Locker(CSType& cs): m_cs(cs)
    {
        EnterCriticalSection(&m_cs);
    }
    ~Locker()
    {
        LeaveCriticalSection(&m_cs);
    }
private:
    CSType& m_cs;
};
My question: Is it ok to transform somefunction() to this?
(putting 2 Locker in one function):
somefunction()
{
    // g_List, g_Variable previously initialized via InitializeCriticalSection
    Locker lockList(g_List);
    Locker lockVariable(g_Variable);
    ...
    ...
}
Your current solution has a potential deadlock case. If you have two (or more) CSTypes that are locked in different orders this way, you can end up in a deadlock. The best way is to lock them both atomically. You can see an example of this in the Boost thread library: shared_lock and unique_lock can be used in deferred mode, so that you first prepare the RAII objects for all the mutex objects, and then lock them all atomically in one call to the lock function.
As long as you keep the lock order the same in all your threads, it's OK. Do you really need to lock them both at the same time? Also, with a scoped lock you can add scopes to control when to unlock, something like this:
{
    // use inner scopes to control lock duration
    {
        Locker lockList(g_list);
        // do something
    } // unlocked at the end
    Locker lockVariable(g_variable);
    // do something
}

Pausing std::thread until a function finishes

class Class {
public:
    Class();
private:
    std::thread* updationThread;
};
Constructor:
Class::Class() {
    updationThread = new std::thread(&someFunc);
}
At some point in my application, I have to pause that thread and call a function and after execution of that function I have to resume the thread. Let's say it happens here:
void Class::aFunction() {
    functionToBeCalled(); // Before this, the thread should be paused
    // Now, the thread should be resumed.
}
I have tried to use another thread running functionToBeCalled() together with thread::join, but was unable to make it work for some reason.
How can I pause a thread, or how can I use thread::join to pause a thread until another one finishes?
I don't think you can easily (in a standard way) "pause" a thread and then resume it. I imagine you can send SIGSTOP and SIGCONT if you are using some Unix-flavored OS, but otherwise you should protect the critical parts inside someFunc() with mutexes and locks, and wrap functionToBeCalled() with a lock on the corresponding mutex:
std::mutex m; // Global mutex, you should find a better place to put it
// (possibly in your object)
and inside the function:
void someFunc() {
    // I am just making up stuff here
    while(...) {
        func1();
        {
            std::lock_guard<std::mutex> lock(m); // lock the mutex
            ...; // Stuff that must not run with functionToBeCalled()
        } // Mutex unlocked here, by end of scope
    }
}
and when calling functionToBeCalled():
void Class::aFunction() {
    std::lock_guard<std::mutex> lock(m); // lock the mutex
    functionToBeCalled();
} // Mutex unlocked here, by end of scope
You can use a condition variable. An example similar to your situation is given there:
http://en.cppreference.com/w/cpp/thread/condition_variable