Should mutexes be mutable? - c++

Not sure if this is a style question, or something that has a hard rule...
If I want to keep the public method interface as const as possible, but make the object thread safe, should I use mutable mutexes? In general, is this good style, or should a non-const method interface be preferred? Please justify your view.

The hidden question is: where do you put the mutex protecting your class?
As a summary, let's say you want to read the content of an object which is protected by a mutex.
The "read" method should be semantically "const" because it does not change the object itself. But to read the value, you need to lock a mutex, extract the value, and then unlock the mutex, meaning the mutex itself must be modified, meaning the mutex itself can't be "const".
If the mutex is external
Then everything's OK. The object can be "const", and the mutex doesn't need to be:
Mutex mutex ;
int foo(const Object & object)
{
Lock<Mutex> lock(mutex) ;
return object.read() ;
}
IMHO, this is a bad solution, because anyone could reuse the mutex to protect something else. Including you. In fact, you will betray yourself because, if your code is complex enough, you'll just be confused about what this or that mutex is exactly protecting.
I know: I was a victim of that problem.
If the mutex is internal
For encapsulation purposes, you should put the mutex as near as possible to the object it's protecting.
Usually, you'll write a class with a mutex inside. But sooner or later, you'll need to protect some complex STL structure, or some other class written by someone else without a mutex inside (which is a good thing).
A good way to do this is to derive from the original class with a template that adds the mutex feature:
template <typename T>
class Mutexed : public T
{
public :
Mutexed() : T() {}
// etc.
void lock() { this->m_mutex.lock() ; }
void unlock() { this->m_mutex.unlock() ; }
private :
Mutex m_mutex ;
} ;
This way, you can write:
int foo(const Mutexed<Object> & object)
{
Lock<Mutexed<Object> > lock(object) ;
return object.read() ;
}
The problem is that it won't work because object is const, and the lock object is calling the non-const lock and unlock methods.
The Dilemma
If you believe const is limited to bitwise const objects, then you're screwed, and must go back to the "external mutex solution".
The solution is to admit that const is more of a semantic qualifier (as is volatile when used as a method qualifier). You are hiding the fact that the class is not bitwise const, but you still provide an implementation that keeps the promise that the meaningful parts of the class won't change when a const method is called.
You must then declare your mutex mutable, and the lock/unlock methods const:
template <typename T>
class Mutexed : public T
{
public :
Mutexed() : T() {}
// etc.
void lock() const { this->m_mutex.lock() ; }
void unlock() const { this->m_mutex.unlock() ; }
private :
mutable Mutex m_mutex ;
} ;
The internal mutex solution is a good one IMHO: having two objects declared next to each other on the one hand, and having them both aggregated in a wrapper on the other hand, is the same thing in the end.
But the aggregation has the following pros:
It's more natural (you lock the object before accessing it)
One object, one mutex. Because the code style forces you to follow this pattern, it decreases deadlock risks: one mutex protects one object only (not multiple objects you won't really remember), and one object is protected by one mutex only (not by multiple mutexes that need to be locked in the right order)
The mutexed class above can be used for any class
So, keep your mutex as near as possible to the mutexed object (e.g. using the Mutexed construct above), and go for the mutable qualifier for the mutex.
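For reference, here is a minimal sketch of the same idea using C++11's std::mutex (the class and member names are just for illustration):
#include <mutex>

class Account
{
public:
    void deposit(int amount)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_balance += amount;
    }

    // Semantically const: the observable state is not modified,
    // only the mutable mutex is locked and unlocked.
    int balance() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_balance;
    }

private:
    mutable std::mutex m_mutex;
    int m_balance = 0;
};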
Edit 2013-01-04
Apparently, Herb Sutter has the same viewpoint: his presentation about the "new" meanings of const and mutable in C++11 is very enlightening:
http://herbsutter.com/2013/01/01/video-you-dont-know-const-and-mutable/

Basically, using const methods with mutable mutexes is a good idea (don't return references, by the way; make sure to return by value), at least to indicate they do not modify the object. Mutexes should not be const: it would be a shameless lie to define lock/unlock methods as const...
Actually this (and memoization) are the only fair uses I see of the mutable keyword.
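To illustrate the "return by value, not by reference" point, a small sketch (names are made up):
#include <mutex>
#include <string>
#include <utility>

class Config
{
public:
    // Good: the copy is made while the lock is held, so the caller never
    // sees m_name while another thread is modifying it.
    std::string name() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_name;
    }

    // Returning const std::string& here instead would be a bug: the reference
    // outlives the lock, so the caller could read m_name during a set_name().

    void set_name(std::string n)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_name = std::move(n);
    }

private:
    mutable std::mutex m_mutex;
    std::string m_name;
};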
You could also use a mutex which is external to your object: arrange for all your methods to be reentrant, and have the user manage the lock herself : { lock locker(the_mutex); obj.foo(); } is not that hard to type, and
{
lock locker(the_mutex);
obj.foo();
obj.bar(42);
...
}
has the advantage it doesn't require two mutex locks (and you are guaranteed the state of the object did not change).

Related

Is the execution of a C++11 atomic object also atomic?

I have an object and all its functions should be executed in sequential order.
I know it is possible to do that with a mutex, like:
#include <mutex>
class myClass {
private:
std::mutex mtx;
public:
void exampleMethod();
};
void myClass::exampleMethod() {
std::unique_lock<std::mutex> lck (mtx); // lock mutex until end of scope
// do some stuff
}
But with this technique a deadlock occurs when another mutex-locked method is called within exampleMethod,
so I'm searching for a better solution.
The default std::atomic access is sequentially consistent, so it's not possible to read and write this object at the same time. But when I access my object and call a method, is the whole function call also atomic, or is it more like:
object* obj = atomicObj.load(); // read the atomic pointer
obj->doSomething();             // call a method on the non-atomic object
If yes, is there a better way than locking most of the functions with a mutex?
Stop and think about when you actually need to lock a mutex. If you have some helper function that is called within many other functions, it probably shouldn't try to lock the mutex, because the caller already will have.
If in some contexts it is not called by another member function, and so does need to take a lock, provide a wrapper function that actually does that. It is not uncommon to have 2 versions of member functions, a public foo() and a private fooNoLock(), where:
public:
void foo() {
std::lock_guard<std::mutex> l(mtx);
fooNoLock();
}
private:
void fooNoLock() {
// do stuff that operates on some shared resource...
}
In my experience, recursive mutexes are a code smell that indicate the author hasn't really got their head around the way the functions are used - not always wrong, but when I see one I get suspicious.
As for atomic operations, they can really only be applied for small arithmetic operations, say incrementing an integer, or swapping 2 pointers. These operations are not automatically atomic, but when you use atomic operations, these are the sorts of things they can be used for. You certainly can't have any reasonable expectations about 2 separate operations on a single atomic object. Anything could happen in between the operations.
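A tiny sketch of that last point, assuming a plain std::atomic<int> counter:
#include <atomic>

std::atomic<int> counter{0};

void unsafe_check_then_act()
{
    // Each load and store is atomic on its own, but another thread may run
    // between them, so this check-then-increment is NOT atomic as a whole.
    if (counter.load() < 10)
        counter.store(counter.load() + 1);
}

void safe_increment()
{
    counter.fetch_add(1);   // a single atomic read-modify-write operation
}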
You could use a std::recursive_mutex instead. This will allow a thread that already owns the mutex to reacquire it without blocking. However, if another thread tries to acquire the lock, it will block.
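A minimal sketch of that option, based on the class from the question (the helper() method is made up):
#include <mutex>

class myClass {
private:
    std::recursive_mutex mtx;

    void helper() {
        // Locking again from the same thread does not deadlock.
        std::lock_guard<std::recursive_mutex> lck(mtx);
        // do some stuff
    }

public:
    void exampleMethod() {
        std::lock_guard<std::recursive_mutex> lck(mtx);
        helper();   // OK: this thread already owns mtx
    }
};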
As @BoBTFish correctly indicated, it is better to structure your class so that the public member functions acquire a non-recursive lock and then call private methods which don't. Your code must then assume a lock is always held when a private method runs.
To be safe about this, you may add a std::unique_lock<std::mutex> reference parameter to each of the methods that require the lock to be held.
Thus, even if you happen to call one private method from another, you would need to make sure a mutex is locked before execution:
class myClass
{
std::mutex mtx;
//
void i_exampleMethod(std::unique_lock<std::mutex> &)
{
// execute method
}
public:
void exampleMethod()
{
std::unique_lock<std::mutex> lock(mtx);
i_exampleMethod(lock);
}
};

How can I create a smart pointer that locks and unlocks a mutex?

I have a threaded class from which I would like to occasionally acquire a pointer to an instance variable. I would like this access to be guarded by a mutex so that the thread is blocked from accessing this resource until the client is finished with its pointer.
My initial approach to this is to return a pair of objects: one a pointer to the resource and one a shared_ptr to a lock object on the mutex. This shared_ptr holds the only reference to the lock object so the mutex should be unlocked when it goes out of scope. Something like this:
pair<Resource*, shared_ptr<Lock> > A::getResource()
{
Lock* lock = new Lock(&mMutex);
return pair<Resource*, shared_ptr<Lock> >(
&mResource,
shared_ptr<Lock>(lock));
}
This solution is less than ideal because it requires the client to hold onto the entire pair of objects. Behaviour like this breaks the thread safety:
Resource* r = a.getResource().first;
In addition, my own implementation of this is deadlocking and I'm having difficulty determining why, so there may be other things wrong with it.
What I would like to have is a shared_ptr that contains the lock as an instance variable, binding it with the means to access the resource. This seems like something that should have an established design pattern but having done some research I'm surprised to find it quite hard to come across.
My questions are:
Is there a common implementation of this pattern?
Are there issues with putting a mutex inside a shared_ptr that I'm overlooking that prevent this pattern from being widespread?
Is there a good reason not to implement my own shared_ptr class to implement this pattern?
(NB I'm working on a codebase that uses Qt but unfortunately cannot use boost in this case. However, answers involving boost are still of general interest.)
I'm not sure if there are any standard implementations, but since I like re-implementing stuff for no reason, here's a version that should work (assuming you don't want to be able to copy such pointers):
template<class T>
class locking_ptr
{
public:
locking_ptr(T* ptr, mutex* lock)
: m_ptr(ptr)
, m_mutex(lock)
{
m_mutex->lock();
}
~locking_ptr()
{
if (m_mutex)
m_mutex->unlock();
}
locking_ptr(locking_ptr<T>&& ptr)
: m_ptr(ptr.m_ptr)
, m_mutex(ptr.m_mutex)
{
ptr.m_ptr = nullptr;
ptr.m_mutex = nullptr;
}
T* operator ->()
{
return m_ptr;
}
T const* operator ->() const
{
return m_ptr;
}
private:
// disallow copy/assignment
locking_ptr(locking_ptr<T> const& ptr)
{
}
locking_ptr& operator = (locking_ptr<T> const& ptr)
{
return *this;
}
T* m_ptr;
mutex* m_mutex; // whatever implementation you use
};
You're describing a variation of the EXECUTE AROUND POINTER pattern, described by Kevlin Henney in Executing Around Sequences.
I have a prototype implementation at exec_around.h but I can't guarantee it works correctly in all cases as it's a work in progress. It includes a function mutex_around which creates an object and wraps it in a smart pointer that locks and unlocks a mutex when accessed.
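For illustration, here is a rough sketch of that idea (not the actual exec_around.h code): operator-> returns a temporary proxy that locks the mutex in its constructor and unlocks it in its destructor, so the lock spans exactly one member access.
#include <mutex>
#include <utility>

template <typename T>
class locking_wrapper
{
public:
    class proxy
    {
    public:
        proxy(T* p, std::mutex& m) : m_ptr(p), m_lock(m) {}
        T* operator->() const { return m_ptr; }
    private:
        T* m_ptr;
        std::unique_lock<std::mutex> m_lock;   // held for the proxy's lifetime
    };

    template <typename... Args>
    explicit locking_wrapper(Args&&... args) : m_obj(std::forward<Args>(args)...) {}

    // wrapped->f() builds a temporary proxy: the mutex is locked before the
    // call and released at the end of the full expression.
    proxy operator->() { return proxy(&m_obj, m_mutex); }

private:
    T m_obj;
    std::mutex m_mutex;
};
With a locking_wrapper<std::vector<int>> v, a call such as v->push_back(1) is then automatically protected for the duration of that one call.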
There is another approach here. It is far less flexible and less generic, but also far simpler, and it still seems to fit your exact scenario.
shared_ptr (both the standard one and Boost's) offers a way to construct it from another shared_ptr instance, which will be used for the usage counter, plus an arbitrary pointer that will not be managed at all. On cppreference.com it is the 8th form (the aliasing constructor).
Normally, this form is used for conversions - like providing a shared_ptr to a base-class object from a shared_ptr to a derived-class object. They share ownership and the usage counter, but (in general) have two different pointer values of different types. This form is also used to provide a shared_ptr to a member value based on a shared_ptr to the object it is a member of.
Here we can "abuse" the form to provide lock guard. Do it like this:
auto A::getResource()
{
auto counter = std::make_shared<Lock>(&mMutex);
std::shared_ptr<Resource> result{ counter, &mResource };
return result;
}
The returned shared_ptr points to mResource and keeps mMutex locked for as long as it is used by anyone.
The problem with this solution is that it is now your responsibility to ensure that the mResource remains valid (in particular - it doesn't get destroyed) for that long as well. If locking mMutex is enough for that, then you are fine.
Otherwise, the above solution must be adjusted to your particular needs. For example, you might want to make the counter a simple struct that keeps both the Lock and another shared_ptr to the A object owning mResource.
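For illustration, a sketch of that adjustment (this assumes A is itself managed by a shared_ptr, that mMutex is a std::mutex, and that this helper can reach A's mutex and resource, e.g. as a member or friend; the names are made up):
#include <memory>
#include <mutex>

// Keeps the A object alive AND its mutex locked for as long as any copy of
// the returned shared_ptr exists. Members are destroyed in reverse order,
// so the lock is released before the owner may be destroyed.
struct Keeper
{
    std::shared_ptr<A> owner;
    std::unique_lock<std::mutex> lock;
};

std::shared_ptr<Resource> getResource(const std::shared_ptr<A>& a)
{
    auto keeper = std::make_shared<Keeper>();
    keeper->owner = a;
    keeper->lock = std::unique_lock<std::mutex>(a->mMutex);
    // Aliasing constructor: ownership follows 'keeper', the pointer is the resource.
    return std::shared_ptr<Resource>(keeper, &a->mResource);
}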
To add to Adam Badura's answer, for a more general case using std::mutex and std::lock_guard, this worked for me:
auto A::getResource()
{
auto counter = std::make_shared<std::lock_guard<std::mutex>>(mMutex);
std::shared_ptr<Resource> ptr{ counter, &mResource} ;
return ptr;
}
where the lifetimes of std::mutex mMutex and Resource mResource are managed by some class A.

Should I use const to make objects thread-safe?

I wrote a class whose instances may be accessed by several threads. I used a trick to remind users that they have to lock the object before using it. It involves keeping only const instances. When in need to read or modify sensitive data, other classes should call a method (which is const, thus allowed) to get a non-const version of the locked object. Actually it returns a proxy object containing a pointer to the non-const object and a scoped_lock, so it unlocks the object when going out of scope. The proxy object also overloads operator-> so the access to the object is transparent.
This way, shooting oneself in the foot by accessing unlocked objects is harder (there is always const_cast).
"Clever tricks" should be avoided, and this smells bad anyway.
Is this design really bad ?
What else can I or should I do ?
Edit: Getters are non-const to enforce locking.
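For concreteness, here is a sketch of the kind of scheme being described (all names are made up):
#include <mutex>

class Widget;

// Returned by value from a const method: carries the lock and a non-const pointer.
class WidgetProxy
{
public:
    WidgetProxy(Widget& w, std::mutex& m) : m_widget(&w), m_lock(m) {}
    Widget* operator->() const { return m_widget; }
private:
    Widget* m_widget;
    std::unique_lock<std::mutex> m_lock;   // released when the proxy goes out of scope
};

class Widget
{
public:
    // The only operation offered on a const Widget: lock it and hand back full access.
    WidgetProxy acquire() const
    {
        return WidgetProxy(const_cast<Widget&>(*this), m_mutex);
    }
    // Deliberately non-const: even reads must go through acquire().
    int value() { return m_value; }
    void set_value(int v) { m_value = v; }
private:
    mutable std::mutex m_mutex;
    int m_value = 0;
};
Client code only ever holds const Widget references and writes something like w.acquire()->set_value(42), with the mutex held until the temporary proxy dies.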
Basic problem: a non-const reference may exist elsewhere. If that gets written safely, it does not follow that it can be read safely -- you may look at an intermediate state.
Also, some const methods might (legitimately) modify hidden internal details in a thread-unsafe way.
Analyse what you're actually doing to the object and find an appropriate synchronisation mode.
If your clever container really does know enough about the objects to control all their synchronisation via proxies, then make those objects private inner classes.
This is clever, but unfortunately doomed to fail.
The problem, underlined by spraff, is that you protect writes but not reads.
Consider the following sequence:
unsigned getAverageSalary(Employee const& e) {
return e.paid() / e.hired_duration();
}
What happens if we increment paid between the two function calls ? We get an incoherent value.
The problem is that your scheme does not explicitly enforce locking for reads.
Consider the alternative of a Proxy pattern: The object itself is a bundle of data, all privates. Only a Proxy class (friend) can read/write its data, and when initializing the Proxy it grabs the lock (on the mutex of the object) automatically.
class Data {
friend class Proxy;
Mutex _mutex;
int _bar;
};
class Proxy {
public:
Proxy(Data& data): _lock(data._mutex), _data(data) {}
int bar() const { return _data._bar; }
void bar(int b) { _data._bar = b; }
private:
Proxy(Proxy const&) = delete; // disable copy
Lock _lock;
Data& _data;
};
If I wanted to do what you are doing, I would do one of the following.
Method 1:
shared_mutex m; // somewhere outside the class
class A
{
private:
int variable;
public:
void lock() { m.lock(); }
void unlock() { m.unlock(); }
bool is_locked() { return m.is_locked(); }
bool write_to_var(int newvalue)
{
if (!is_locked())
return false;
variable = newvalue;
return true;
}
bool read_from_var(int *value)
{
if (!is_locked() || value == NULL)
return false;
*value = variable;
return true;
}
};
Method 2:
shared_mutex m; // somewhere outside the class
class A
{
private:
int variable;
public:
void write_to_var(int newvalue)
{
m.lock();
variable = newvalue;
m.unlock();
}
int read_from_var()
{
m.lock();
int to_return = variable;
m.unlock();
return to_return;
}
};
The first method is more efficient (not locking-unlocking all the time), however, the program may need to keep checking the output of every read and write to see if they were successful. The second method automatically handles the locking and so the programmer wouldn't even know the lock is there.
Note: This is not code for copy-paste. It shows a concept and sketches how it's done. Please don't comment to point out that some error checking is missing.
This sounds a lot like Alexandrescu's idea with volatile. You're not using the actual semantics of const, but rather exploiting the way the type system uses it. In this regard, I would prefer Alexandrescu's use of volatile: const has very definite and well understood semantics, and subverting them will definitely cause confusion for anyone reading or maintaining the code. volatile is more appropriate, as it has no well defined semantics, and in the context of most applications, is not used for anything else.
And rather than returning a classical proxy object, you should return a smart pointer. You could actually use shared_ptr for this, grabbing the lock before returning the value, and releasing it in the deleter (rather than deleting the object); I rather fear, however, that this would cause some confusion amongst the readers, and I would probably go with a custom smart pointer (probably using shared_ptr with the custom deleter in the implementation). (From your description, I suspect that this is closer to what you had in mind anyway.)
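A sketch of the shared_ptr-with-custom-deleter variant mentioned above (illustrative names): the lock is taken before the pointer is handed out, and the "deleter" only releases the lock instead of deleting anything.
#include <memory>
#include <mutex>

struct Resource { int value = 0; };

class Holder
{
public:
    std::shared_ptr<Resource> get()
    {
        m_mutex.lock();
        // The deleter runs when the last copy of the returned pointer goes away:
        // it unlocks the mutex and deliberately does not delete m_resource.
        return std::shared_ptr<Resource>(&m_resource,
                                         [this](Resource*) { m_mutex.unlock(); });
    }
private:
    std::mutex m_mutex;
    Resource   m_resource;
};
As the answer warns, hiding this inside a dedicated smart-pointer type is probably kinder to readers than a bare shared_ptr with a surprising deleter.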

C++ mutex and const correctness

Is there a convention regarding whether a method which is essentially read-only, but has a mutex/lock which may need to be modified, should be const or not?
If there is not one, what would be the disadvantage or bad design if such a method were const?
Thank you
You can mark data members with the keyword mutable to allow them to be modified in a constant member function, e.g.:
struct foo
{
mutable mutex foo_mutex;
// ....
void bar() const
{
auto_locker lock(foo_mutex);
// ...
}
};
Try to do this as little as possible because abusing mutable is evil.
I'm generally OK with mutable locks and caches for methods that are conceptually const.
Especially in the case of caching the result of a calculation for performance. That's strictly an implementation detail that shouldn't be of concern to the callers, so removing the const designation would be tantamount to a small leak in the abstraction.
With locks, I'd ask myself if the lock is just a private implementation detail. If the lock is shared with other objects, then it's actually part of the interface.
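As an illustration of the caching case, a small sketch (made-up names) where both the cache and its mutex are mutable, so a conceptually const accessor can fill the cache lazily:
#include <mutex>
#include <optional>

class Report
{
public:
    // Conceptually const: callers cannot tell whether the value came from
    // the cache or was just computed.
    double total() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (!m_cached_total)
            m_cached_total = compute_total();   // expensive calculation
        return *m_cached_total;
    }
private:
    double compute_total() const { return 42.0; }   // stand-in for real work
    mutable std::mutex m_mutex;
    mutable std::optional<double> m_cached_total;
};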
On some platforms, locks are accessed through handles, so you can use const on the method without worrying about mutable.

Any comments to this (hopefully) 100% safe double-check-locking replacement for singletons

As Scott Meyers and Andrei Alexandrescu outlined in this article, the naive attempt at implementing double-checked locking is unsafe in C++ specifically, and in general on multi-processor systems, without using memory barriers.
I was thinking a little bit about that and came to a solution that avoids using memory barriers and should also be 100% safe in C++. The trick is to store a copy of the pointer to the instance in thread-local storage, so each thread has to acquire the lock only the first time it accesses the singleton.
Here is a little sample code (syntax not checked; I used pthreads, but any other threading library could be used):
class Foo
{
private:
Helper *helper;
pthread_key_t localHelper;
pthread_mutex_t mutex;
public:
Foo()
: helper(NULL)
{
pthread_key_create(&localHelper, NULL);
pthread_mutex_init(&mutex, NULL);
}
~Foo()
{
pthread_key_delete(localHelper);
pthread_mutex_destroy(&mutex);
}
Helper *getHelper()
{
Helper *res = static_cast<Helper *>(pthread_getspecific(localHelper));
if (res == NULL)
{
pthread_mutex_lock(&mutex);
if (helper == NULL)
{
helper = new Helper();
}
res = helper;
pthread_mutex_unlock(&mutex);
pthread_setspecific(localHelper, res);
}
return res;
}
};
What are your comments/opinions?
Do you find any flaws in the idea or the implementation?
EDIT:
Helper is the type of the singleton object (I know the name is not the best... I took it from the Java examples in the Wikipedia article about DCLP).
Foo is the Singleton container.
EDIT 2:
Because there seems to be some misunderstanding about the fact that Foo is not a static class and about how it is used, here is an example of the usage:
static Foo foo;
.
.
.
foo.getHelper()->doSomething();
.
.
.
The reason that Foo's members are not static is simply that I was able to create/destroy the mutex and the TLS in the constructor/destructor.
If a RAII version of a C++ mutex/TLS class is used, Foo can easily be switched to be static.
You seem to be calling:
pthread_mutex_init(&mutex);
...in the Helper() constructor. But that constructor is itself called in the function getHelper() (which should be static, I think) which uses the mutex. So the mutex appears to be initialised twice or not at all.
I find the code very confusing, I must say. Double-checked locking is not that complex. Why don't you start again, and this time create a Mutex class, which does the initialisation, and uses RAII to release the underlying pthread mutex? Then use this Mutex class to implement your locking.
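A minimal sketch of what such a RAII pair might look like (illustrative, error handling omitted):
#include <pthread.h>

class Mutex
{
public:
    Mutex()  { pthread_mutex_init(&m_mutex, NULL); }
    ~Mutex() { pthread_mutex_destroy(&m_mutex); }
    void lock()   { pthread_mutex_lock(&m_mutex); }
    void unlock() { pthread_mutex_unlock(&m_mutex); }
private:
    Mutex(const Mutex&);             // non-copyable
    Mutex& operator=(const Mutex&);
    pthread_mutex_t m_mutex;
};

class ScopedLock
{
public:
    explicit ScopedLock(Mutex& m) : m_mutex(m) { m_mutex.lock(); }
    ~ScopedLock() { m_mutex.unlock(); }
private:
    ScopedLock(const ScopedLock&);   // non-copyable
    ScopedLock& operator=(const ScopedLock&);
    Mutex& m_mutex;
};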
This isn't the double-checked locking pattern. Most of the potential thread safety issues of the pattern are due to the fact that a common state is read outside of a mutually exclusive lock and then re-checked inside it.
What you are doing is checking a thread local data item, and then checking the common state inside a lock. This is more like a standard single check singleton pattern with a thread local cached value optimization.
To a casual glance it does look safe, though.
Looks interesting! Clever use of thread-local storage to reduce contention.
But I wonder if this is really different from the problematic approach outlined by Meyers/Alexandrescu...?
Say you have two threads for which the singleton is uninitialized (e.g. thread local slot is empty) and they run getHelper in parallel.
Won't they get into the same race over the helper member? You're still calling operator new and you're still assigning that to a member, so the risk of rogue reordering is still there, right?
EDIT: Ah, I see now. The lock is taken around the NULL-check, so it should be safe. The thread-local replaces the "first" NULL-check of the DCLP.
Obviously a few people are misunderstanding your intent/solution. Something to think about.
I think the real question to be asked is this:
Is calling pthread_getspecific() cheaper than a memory barrier?
Are you trying to make a thread-safe singleton, or implement Meyers & Alexandrescu's tips? The most straightforward thing is to use pthread_once, as is done below. Obviously you're not going for lock-free speed or anything, creating and destroying mutexes as you are, so the code may as well be simple and clear - this is the kind of thing pthread_once was made for. Note that the Helper pointer is static, otherwise it'd be harder to guarantee that there's only one Helper object:
// Foo.h
#include <pthread.h>
class Helper {};
class Foo
{
private:
static Helper* s_pTheHelper;
static ::pthread_once_t once_control;
private:
static void createHelper() { s_pTheHelper = new Helper(); }
public:
Foo()
{ // stuff
}
~Foo()
{ // stuff
}
static Helper* getInstance()
{
::pthread_once(&once_control, Foo::createHelper);
return s_pTheHelper;
}
};
// Foo.cpp
// ..
Helper* Foo::s_pTheHelper = NULL;
::pthread_once_t Foo::once_control = PTHREAD_ONCE_INIT;
// ..