I am new to multithreaded programming, but I am studying a big project written by someone else. The code has a singleton class with some public member variables and a member mutex. The singleton is used from different threads like this:
singleton::instance()->mutex.lock();
singleton::instance()->value = getval();
singleton::instance()->mutex.release();
Is this the safe way to do it?
If not, what is the proper way to read/write the value in the singleton?
No it is not safe to do so.
The problem is that the mutex is handed out to the user, so there is no guarantee that the lock will ever be released. For example, what happens if getval() throws an exception?
The proper way to do so would be to embed mutex use inside the API of your singleton. For example:
void singleton::setvalue(int val) {           // example supposing value is an int
    std::lock_guard<std::mutex> mylck(mutex);
    value = val;
}
In this example, a local std::lock_guard is used. This object locks the mutex on construction and unlocks it on destruction. This makes sure that the mutex is always unlocked when the function returns, even if an exception is thrown.
Note: even a simple read such as return value; is not safe without the lock if another thread may be writing the variable concurrently; unsynchronized reads and writes are a data race. Use the same locking (or std::atomic) for getters.
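A getter can follow the same pattern. This is just a minimal sketch, assuming value is an int member and mutex is the singleton's member std::mutex used above:

int singleton::getvalue() {                    // hypothetical getter mirroring setvalue()
    std::lock_guard<std::mutex> mylck(mutex);  // locks now, unlocks automatically on return
    return value;
}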
About the code in the question: assuming the lock is implemented correctly, anything done between the lock() and release() calls is protected.
Please consider this classical approach, I have simplified it to highlight the exact question:
#include <iostream>
#include <mutex>

using namespace std;

class Test
{
public:
    void modify()
    {
        std::lock_guard<std::mutex> guard(m_);
        // modify data
    }

private:
    /// some private data
    std::mutex m_;
};
This is the classical approach of using std::mutex to avoid data races.
The question is: why are we keeping an extra std::mutex in our class? Why can't we declare it each time, right before the std::lock_guard, like this?
void modify()
{
    std::mutex m_;
    std::lock_guard<std::mutex> guard(m_);
    // modify data
}
Let's say two threads call modify in parallel. Each thread then gets its own, new mutex, so the guard has no effect: each guard is locking a different mutex. The resource you are trying to protect from race conditions is left exposed.
The misunderstanding comes from what the mutex is and what the lock_guard is good for.
A mutex is an object that is shared among different threads, and each thread can lock and release the mutex. That's how synchronization among different threads works. So you can work with m_.lock() and m_.unlock() as well, yet you have to be very careful that all code paths in your function (including exceptional exits) actually unlock the mutex.
To avoid the pitfall of missing unlocks, a lock_guard is a wrapper object which locks the mutex at wrapper object creation and unlocks it at wrapper object destruction. Since the wrapper object is an object with automatic storage duration, you will never miss an unlock - that's why.
A local mutex does not make sense, as it would be local rather than a shared resource. A local lock_guard makes perfect sense, as its automatic storage duration prevents missed locks/unlocks.
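To make this concrete, here is a minimal sketch (the Counter class is made up) in which two threads share the same member mutex, so their updates are serialized:

#include <mutex>
#include <thread>

// hypothetical example: both threads lock the *same* member mutex,
// so the increments are serialized and no update is lost
struct Counter {
    void modify() {
        std::lock_guard<std::mutex> guard(m_);
        ++data_;
    }
    int value() const { return data_; } // read only after the threads have joined
private:
    std::mutex m_;
    int data_ = 0;
};

int main() {
    Counter c;
    std::thread t1([&c] { for (int i = 0; i < 100000; ++i) c.modify(); });
    std::thread t2([&c] { for (int i = 0; i < 100000; ++i) c.modify(); });
    t1.join();
    t2.join();
    // c.value() is reliably 200000 here; with a local mutex inside modify() it usually would not be
}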
Hope it helps.
This all depends on the context of what you want to prevent from being executed in parallel.
A mutex only works when multiple threads use the same mutex object. When two threads try to acquire the lock on the same mutex object, only one of them will succeed at a time.
Now in your second example, if two threads call modify(), each thread will have its own instance of that mutex, so nothing will stop them from running that function in parallel as if there's no mutex.
So to answer your question: It depends on the context. The mission of the design is to ensure that all threads that should not be executed in parallel will hit the same mutex object at the critical part.
Synchronization of threads involves checking whether another thread is executing the critical section. A mutex is the object that holds the state we check to see whether it was "locked" by a thread. A lock_guard, on the other hand, is a wrapper that locks the mutex on initialization and unlocks it during destruction.
Having realized that, it should be clearer why there has to be only one instance of the mutex that all lock_guards need access to - they need to check if it's clear to enter the critical section against the same object. In the second snippet of your question each function call creates a separate mutex that is seen and accessible only in its local context.
You need a mutex at class level. Otherwise, each thread has a mutex for itself, and therefore the mutex has no effect.
If for some reason you don't want your mutex to be stored in a class attribute, you could use a static mutex as shown below.
void modify()
{
    static std::mutex myMutex;
    std::lock_guard<std::mutex> guard(myMutex);
    // modify data
}
Note that here there is only 1 mutex for all the class instances. If the mutex is stored in an attribute, you would have one mutex per class instance. Depending on your needs, you might prefer one solution or the other.
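For illustration, a small sketch (the class names are made up) contrasting the two choices:

#include <mutex>

struct PerInstance {
    void modify() {
        std::lock_guard<std::mutex> guard(m_); // serializes calls on the *same* object only
        // modify this object's data
    }
private:
    std::mutex m_;                             // one mutex per object
};

struct SharedAcrossInstances {
    void modify() {
        static std::mutex m;                   // one mutex for the whole program
        std::lock_guard<std::mutex> guard(m);  // serializes calls across *all* objects
        // modify data
    }
};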
I have a thread-unsafe function call that I need to make thread safe, so I'm trying to use a std::mutex and a std::lock_guard. Currently the code looks like this:
int myFunc(int value) {
    static std::mutex m;
    std::lock_guard lock(m);
    auto a = unsafe_call(value);
    return a;
}
unsafe_call() being the thread-unsafe function. I've made the mutex static because otherwise the function would create a different mutex for each separate call.
What I want to know is: should I also make the lock_guard static because the mutex is a static one, as a coworker of mine told me? I don't see a reason why I should, because I consider the lock_guard to be just a paired .lock() and .unlock() call, but the suggestion is confusing me.
should I also make the lock_guard static because the mutex is a static one, as a coworker of mine told me?
Your coworker is completely wrong. lock_guard uses the RAII mechanism to control a resource (the mutex in this case), and making it static would completely defeat its purpose: the mutex would be locked once, on the first execution of the function, and released only when the program terminates, i.e. effectively you would not be using the mutex at all.
I don't see a reason why I should, because I consider the lock_guard to be just a paired .lock() and .unlock() call, but the suggestion is confusing me.
That suggestion is nonsense IMO, and you are rightly confused about it.
The purpose of std::lock_guard is to guarantee locking/unlocking of the mutex within a particular scope (and to be robust against exceptions). This relies on the destructor, which is guaranteed to be called when the variable's scope is left.
For a static variable, the destructor is called at best at the end of the process's lifetime. Hence the mutex would be locked by the first calling thread for the rest of the process.
Such an effectively process-wide scope, which is what a static lock_guard gives you, is very unlikely to be what you want.
So your colleague is wrong.
should I also make the lock_guard static because the mutex is a static one, as a coworker of mine told me?
NO
Consider the scenario where two threads enter your function and both see the same static std::mutex m. One locks it, and the other one would not even get the chance[1]. Furthermore, when the first call returns, it won't actually unlock the mutex, since the lock_guard has static storage duration and its lifetime has not ended, so no destructor is called.
[1] std::lock_guard locks in its constructor, which would execute only once if the guard is static; the second thread would not even try to lock, as corrected by #Slava in the comments.
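To make the failure mode concrete, here is a sketch of the suggested (broken) variant; unsafe_call is the function from the question:

#include <mutex>

int unsafe_call(int value);                     // the thread-unsafe function from the question

// what the coworker suggested; shown only to illustrate why it is wrong
int myFunc_broken(int value) {
    static std::mutex m;
    static std::lock_guard<std::mutex> lock(m); // constructed once: m is locked on the
                                                // first call and never unlocked again
    // every later call skips the construction above, so it takes no lock at all:
    // there is no mutual exclusion, and anything else trying to lock m will block forever
    return unsafe_call(value);
}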
I did find plenty of questions regarding singletons and thread safety, but none that quite answered this question for me, so I apologize if it is a repeat.
If I have a singleton object which will be used by multiple threads, I understand that any mutation to member variables should be carefully considered, but what about variables that are local to a method?
Consider this pseudo-code:
class Singleton // [assume this has all the trappings of a proper singleton]
{
    int doSomething() {
        SomeObject obj;
        obj.doStuff();
        return obj.result();
    }
};
In this case, is the local 'obj' thread safe? Does each thread get its own copy of it, even though there is only one object of the Singleton class?
Yes, obj is unique per thread.
There could be threading issues, however, if it accessed and modified common data, for example if doStuff or result accesses a static member of SomeObject or some global.
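For example, a hypothetical SomeObject whose methods touch a static member would still race, even though obj itself is local to each call:

// hypothetical sketch of the case that *would* need synchronization:
// obj is per-thread, but the static counter inside SomeObject is shared
struct SomeObject {
    void doStuff() { ++calls; }           // data race if two threads run this concurrently
    int result() const { return calls; }
    static int calls;                     // shared among all threads and all instances
};
int SomeObject::calls = 0;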
As Luchian said, you are fine so far. However, if the method touches a static member, a global, or shared data reached through a reference or pointer, protect it with a mutex or a spin lock. Mutexes and spin locks exist on all Unix-based systems (e.g., via pthreads), and Windows offers equivalents as well, though you have to pull in the appropriate API first.
Here is a link for pthread mutexes: http://www.thegeekstuff.com/2012/05/c-mutex-examples/
And here another one for windows: http://msdn.microsoft.com/en-us/library/windows/desktop/ms686927(v=vs.85).aspx
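For reference, a minimal pthread sketch in the spirit of the first link (the names here are illustrative):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; // statically initialized mutex
static int shared_value = 0;

void set_shared_value(int v) {
    pthread_mutex_lock(&lock);    // block until the mutex is acquired
    shared_value = v;
    pthread_mutex_unlock(&lock);  // always pair lock with unlock
}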
http://doc.qt.io/archives/qt-4.7/qmutexlocker.html
This class locks the mutex in its constructor, so if an error occurs while locking the mutex, will we be able to know what the error was (a constructor doesn't return anything)?
Is this a disadvantage somehow?
Am I missing a point here?
QMutexLocker takes a pointer to (and deals with) a QMutex object - not a pthread_mutex_t object (even if a QMutex might be implemented on top of a pthread_mutex_t).
Locking/unlocking a QMutex object doesn't return any kind of error code (QMutex::lock() and QMutex::unlock() return void).
Any errors that might occur at the lower "pthread-level" will either be handled internally by the QMutex object, result in a C++ exception, or result in a defect (such as deadlock) in your code (for example if you try to recursively acquire a QMutex that's non-recursive).
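If you need a success/failure result rather than a blocking call, QMutex also offers tryLock(); a minimal sketch (assuming a QMutex m shared by the threads):

#include <QMutex>

QMutex m;

void worker() {
    if (m.tryLock(100)) {   // wait up to 100 ms instead of blocking indefinitely
        // ... access shared data ...
        m.unlock();
    } else {
        // could not acquire the mutex in time; report or retry
    }
}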
You may be confusing mutexes and locks. The mutex is the shared synchronisation object. Locks are local objects, local to each execution context, which effect synchronisation by means of locking the common mutex. Thus the mutex already has to exist in order for a lock to make sense:
Foo sharedData;      // \  global /
QMutex sharedDataMX; // /  shared

void run_me_many_times()
{
    QMutexLocker lk(&sharedDataMX);
    // access "sharedData"
}
Suppose I have a function that tries to protect a global counter using this code:
static MyCriticalSectionWrapper lock;
lock.Enter();
counter = ++m_counter;
lock.Leave();
Is there a chance that two threads will invoke the lock's constructor? What is the safe way to achieve this goal?
The creation of the lock object itself is not guaranteed to be thread safe (C++11 requires thread-safe initialization of function-local statics, but older compilers do not provide it). Depending on the compiler, you might get concurrent, conflicting initializations of the lock object if multiple threads enter the function at (nearly) the same time.
The solution to this problem is to use:
OS-guaranteed one-time initialization (for the lock object)
Double-checked locking (Assuming it is safe for your particular case)
A thread safe singleton for the lock object
For your specific example, you may be able to use a thread-safe interlocked operation for the increment (e.g., the InterlockedIncrement() function on Windows) and avoid locking altogether (see the sketch below)
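As an illustration of the last option, a portable C++11 sketch using std::atomic instead of the Windows-specific InterlockedIncrement():

#include <atomic>

std::atomic<int> m_counter{0};   // assumes the counter can be made atomic

int next_counter_value() {
    // atomic read-modify-write: no separate lock object is needed,
    // so there is no lock-construction race to worry about either
    return ++m_counter;          // returns the incremented value
}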
When the constructor is invoked can be implementation- and/or execution-environment-dependent, but this isn't a scoped lock, so that is not an issue here.
The main operation is properly guarded against multithreaded access, I think.
(You know: global for global, function-static for function-static. The lock variable must be defined in the same scope as the guarded object.)
Original sample code:
static MyCriticalSectionWrapper lock;
lock.Enter();
counter = ++m_counter;
lock.Leave();
I realize that the counter code is probably just a placeholder; however, if it is actually what you are trying to do, you could use the Windows function InterlockedIncrement() to accomplish this.
Example:
// atomic increment for thread safety; InterlockedIncrement() returns the new value,
// so there is no separate (racy) read of m_counter afterwards
counter = InterlockedIncrement(&m_counter);
That depends on your lock implementation.