I came across this sample of code provided by a book. By the way, the book has bad reviews, and I regret buying it:
std::mutex m_mutex;
mutable std::unique_lock<std::mutex> m_finishedQueryLock{ m_mutex, std::defer_lock };
bool m_playerQuit{ false };
void SetPlayerQuit()
{
    m_finishedQueryLock.lock();
    m_playerQuit = true;
    m_finishedQueryLock.unlock();
}
I'm not satisfied with the book's explanation of how it works and why I should use it. I already have an idea of how mutex locks work and how they are implemented, but I have difficulty understanding the second line of the code above. And why does it have the mutable keyword in it?
I'm completely new to C++ programming, so a basic-level explanation would help me a lot.
That example looks totally stupid.
The second line is declaring a non-static data member, where:
- the member is mutable (for reasons given below);
- the member is an object of type std::unique_lock<std::mutex>, which is a helper type for locking/unlocking an associated mutex object;
- the member is initialized when an instance of the class is created, by calling its constructor and passing m_mutex and the special tag std::defer_lock as arguments.
But doing that is stupid, and I'm not surprised the book has bad reviews if it has examples like that.
The point of unique_lock is to lock the associated mutex, and then to automatically unlock it when it goes out of scope. Creating a unique_lock member like this is stupid, because it doesn't go out of scope at the end of the function, so the code has absolutely no advantage over:
mutable std::mutex m_mutex;
bool m_playerQuit{ false };
void SetPlayerQuit()
{
    m_mutex.lock();
    m_playerQuit = true;
    m_mutex.unlock();
}
But this manual unlocking has all the problems that unique_lock was designed to solve, so it should be using a scoped lock (either unique_lock or lock_guard) but only in the function scope, not as a member:
mutable std::mutex m_mutex;
bool m_playerQuit{ false };
void SetPlayerQuit()
{
    std::lock_guard<std::mutex> lock(m_mutex);
    m_playerQuit = true;
} // m_mutex is automatically unlocked by the ~lock_guard destructor
The mutable keyword is necessary so that you can lock the mutex inside const member functions. Locking and unlocking a mutex is a non-const operation that modifies the mutex, which would not be allowed on a data member inside a const member function if the member weren't mutable.
std::unique_lock is an RAII type. When constructed with the mutex as an argument, it locks the mutex; when it leaves its scope it is destroyed, which unlocks the mutex. When you need to unlock earlier, you can call the unlock() function, as in your example.
Using the unique_lock like in your example doesn't create added value over using the mutex directly.
The other answers explain very well why the example is not good. I will explain when you would want to use a std::unique_lock instead of a std::lock_guard.
When a thread needs to wait on something, one may use a std::condition_variable. condition_variable::wait must unlock the mutex while the thread sleeps and re-lock it when the thread wakes, so it requires a std::unique_lock, which (unlike lock_guard) can be unlocked and re-locked. You don't need to pass std::defer_lock explicitly for this, as the example does.
To grasp these concepts, you first need to learn what RAII is. Then have a look at this reference. An example of the condition variables can be found here.
Please consider this classical approach, I have simplified it to highlight the exact question:
#include <mutex>

class Test
{
public:
    void modify()
    {
        std::lock_guard<std::mutex> guard(m_);
        // modify data
    }

private:
    // some private data
    std::mutex m_;
};
This is the classical approach of using std::mutex to avoid data races.
The question is: why are we keeping an extra std::mutex in our class? Why can't we declare it every time, just before the std::lock_guard, like this?
void modify()
{
    std::mutex m_;
    std::lock_guard<std::mutex> guard(m_);
    // modify data
}
Let's say two threads call modify in parallel. Each thread then gets its own, new mutex, so the guard has no effect: each guard is locking a different mutex. The resource you are trying to protect from race conditions will be exposed.
The misunderstanding comes from what the mutex is and what the lock_guard is good for.
A mutex is an object that is shared among different threads, and each thread can lock and release the mutex. That's how synchronization among different threads works. So you can work with m_.lock() and m_.unlock() as well, yet you have to be very careful that all code paths in your function (including exceptional exits) actually unlock the mutex.
To avoid the pitfall of a missed unlock, a lock_guard is a wrapper object that locks the mutex when the wrapper is created and unlocks it when the wrapper is destroyed. Since the wrapper is an object with automatic storage duration, you can never miss an unlock - that's why.
A local mutex does not make sense, as it would be local and not a shared resource. A local lock_guard makes perfect sense, as its automatic storage duration prevents missed locks and unlocks.
Hope it helps.
This all depends on the context of what you want to prevent from being executed in parallel.
A mutex only works when multiple threads use the same mutex object. When two threads try to acquire the lock of the same mutex object, only one of them will succeed at a time.
Now in your second example, if two threads call modify(), each thread will have its own instance of that mutex, so nothing will stop them from running that function in parallel as if there's no mutex.
So to answer your question: It depends on the context. The mission of the design is to ensure that all threads that should not be executed in parallel will hit the same mutex object at the critical part.
Synchronization of threads involves checking whether another thread is executing the critical section. A mutex is the object that holds the state we check to see if it was "locked" by a thread. lock_guard, on the other hand, is a wrapper that locks the mutex on initialization and unlocks it during destruction.
Having realized that, it should be clearer why there has to be only one instance of the mutex that all lock_guards access: they need to check, against the same object, whether it's clear to enter the critical section. In the second snippet of your question, each function call creates a separate mutex that is visible and accessible only in its local context.
You need a mutex at class level. Otherwise, each thread has a mutex for itself, and therefore the mutex has no effect.
If for some reason you don't want your mutex to be stored in a class attribute, you could use a static mutex as shown below.
void modify()
{
    static std::mutex myMutex;
    std::lock_guard<std::mutex> guard(myMutex);
    // modify data
}
Note that here there is only 1 mutex for all the class instances. If the mutex is stored in an attribute, you would have one mutex per class instance. Depending on your needs, you might prefer one solution or the other.
I have a thread-unsafe function call which I need to make thread-safe, so I'm trying to use a std::mutex and a std::lock_guard. Currently the code looks like this:
int myFunc(int value){
    static std::mutex m;
    std::lock_guard lock(m);
    auto a = unsafe_call(value);
    return a;
}
unsafe_call() being the thread-unsafe function. I've made the mutex static because otherwise each call would create its own separate mutex.
What I want to know is: should I also make the lock_guard static because the mutex is static, as a coworker of mine told me? I don't see a reason why I should, because I consider lock_guard to be a simple .lock() and .unlock() call, but the suggestion is confusing me.
should I also make lock_guard static because the mutex is static one, as I was told by some coworker of mine.
Your coworker is completely wrong. lock_guard uses the RAII mechanism to control a resource (the mutex, in this case), and making it static would completely defeat its purpose: the mutex would be locked once, on the first execution of the function, and released only when the program terminates, i.e. effectively you would not be using the mutex at all.
I don't see a reason why should I do this, because I consider lock_guard to be a simple .lock() and .unlock() call, but the suggestion from coworker is confusing me.
That suggestion is nonsense, IMO, and you're rightly confused about it.
The purpose of std::lock_guard is to guarantee that the mutex is locked and unlocked within a particular scope (and to be robust against exceptions). This is done using the destructor, which is guaranteed to be called when the variable's scope is left.
For a static variable, the destructor is at best called at the end of the process's lifetime. Hence the mutex would remain locked by the first calling thread for the rest of the process.
A lock scoped to the whole program, which is what a static lock_guard would give you, is very unlikely to be correct or to make sense.
So your colleague is wrong.
should I also make lock_guard static because the mutex is static one, as I was told by some coworker of mine.
NO
Consider the scenario where two threads enter your function and both get the same static std::mutex m. One locks it, and the other never gets a chance[1]. Furthermore, when the first call returns, the mutex won't actually be unlocked: the lock_guard has static storage duration, so it stays alive and its destructor is not called.
[1] - std::lock_guard locks in its constructor, which would be executed only once if the guard is static; the second thread would not even try to lock - as corrected by @Slava in the comments.
This may sound dumb, but I'm sort of confused. I have gone through this question, and it seems we were in the same situation. I have to make my map static so it will be common to all instances created in separate threads, and I want to synchronize the functions that act on the map, so I thought of making the std::mutex static in my class, as suggested in an answer at the given link. In this case, will there be any race condition in acquiring and locking the mutex itself? Is there a better way to synchronize the functions on a static map using a mutex?
Does making std::mutex static create a race condition for the mutex itself?
No, a mutex isn't vulnerable to race conditions. And as for initializing it as static, you are safe:
$6.7/4: Dynamic initialization of a block-scope variable with static storage duration ([basic.stc.static]) or thread storage duration ([basic.stc.thread]) is performed the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.
You said:
i thought of making a std::mutex as static in my class like what was suggested as an answer in the given link.
Do that if you are also trying to protect static class member variables. Otherwise, make it a mutable member. The fact that you said the map will be globally initialized as static is okay, since the mutex, as a member variable, will follow suit.
class Map{
public:
    Map(...){}

    std::size_t size() const{
        std::lock_guard<std::mutex> lck(m_m);
        return m_size;
    }

    iterator add(....) {
        std::lock_guard<std::mutex> lck(m_m);
        ....
        return your_iterator;
    }

    ...etc

private:
    mutable std::mutex m_m; // FREE ADVICE: Use a std::recursive_mutex instead
    ...others
};
Now:
// Somewhere at global scope:
Map mp(... ...);

// NOTES
// 1. `mp` will be initialized in a thread-safe way by the runtime.
// 2. Since you've protected all read and write member functions of the
//    class `Map`, you are safe to call them from any function and from
//    any thread.
No.
Mutexes (and other synchronisation primitives) are implemented using support from the operating system. That's the only way that they can do their job.
A direct corollary of their ability to perform this job is that they are themselves not prone to race conditions: locking and unlocking operations on mutexes are atomic.
Otherwise, they wouldn't be much use! Every time you used a mutex, you'd have to protect it with another mutex, then protect that mutex with another mutex, and so on and so forth until you had an infinite number of mutexes, none of them actually achieving anything of any use at all. :)
The std::mutex object having static storage duration doesn't change this in any way. Presumably you were thinking of function-static variables, which (assuming they aren't already immune to race conditions) must be synchronised because they may be accessed concurrently by different threads; but ideally you wouldn't use them at all, because they make functions non-reentrant.
I am new to multi-threaded programming, but I am studying a big project by someone else. In the code there is a singleton class with some public member variables and a member mutex. The singleton is used in different threads like this:
singleton::instance()->mutex.lock();
singleton::instance()->value = getval();
singleton::instance()->mutex.release();
Is this a safe way to do it?
If not what is the proper way of read/write the value in singleton?
No, it is not safe to do so.
The problem is that the mutex is handed out to the user: there is no guarantee that the lock will ever be released. For example, what happens if getval() throws an exception?
The proper way to do so would be to embed mutex use inside the API of your singleton. For example:
void singleton::setvalue(int val) { // example supposing value is an int
    std::lock_guard<std::mutex> mylck(mutex);
    value = val;
}
In this example, a local std::lock_guard is used. This object locks the mutex and unlocks it on destruction. That makes sure the mutex is unlocked in every case: whenever the function returns, and even if an exception is thrown.
Note: even if all you are doing is reading a variable, like return variable;, doing it without the lock is formally a data race (undefined behavior) if another thread may write it concurrently; keep the lock, or make the variable std::atomic.
About the code: assuming the mutex class is implemented correctly, anything done between lock and release is protected.
I would like to know which is the difference between:
boost::timed_mutex _mutex;
if(_mutex.timed_lock(boost::get_system_time() + boost::posix_time::milliseconds(10))){
    // exclusive code
    _mutex.unlock();
}
and
boost::timed_mutex _mutex;
boost::timed_mutex::scoped_lock scoped_lock(_mutex, boost::get_system_time() + boost::posix_time::milliseconds(10));
if(scoped_lock.owns_lock()) {
    // exclusive code
}
I already know that the scoped_lock makes the call to unlock unnecessary. My questions are:
- Why, in the first case, do we call timed_lock as a member function of the mutex, while in the second we construct a lock from a mutex and a time?
- Which one is more efficient?
- Is the usage of boost::posix_time okay, or is it recommended to use another kind, e.g. chrono or a duration?
- Is there a better (faster) way to try to acquire a lock for 'x' time than the two methods specified above?
I'll try to answer your questions:
As you can read in this document, locks are used as RAII devices for the locked state of a mutex. That is, locks don't own the mutexes they reference; they just own the lock on the mutex. Basically, this means you acquire the mutex's lock when you initialize its corresponding lock object, and release it when the lock object is destroyed.
That's why in the second example you didn't have to call timed_lock on the mutex: the scoped_lock does it for you when it is initialized.
I don't know whether the first example is more efficient, but I know for sure that the second (the RAII scoped_lock) is much safer: it guarantees that you won't forget to unlock the mutex and, more importantly, that other people who use and modify your code won't cause any mutex-related problems.
As far as I know, you can't construct a scoped_lock (which is basically a unique_lock<timed_mutex>) from a posix_time. I personally prefer a duration.
In my opinion, constructing the lock with a duration is your best option.