Locking access to struct with mutex - c++

I have a struct containing two elements.
struct MyStruct {
    int first_element_;
    std::string second_element_;
};
The struct is shared between threads and therefore requires locking. My use case requires locking access to the whole struct rather than just a specific member, for example:
// start of Thread 1's routine
<Thread 1 "locks" struct>
<Thread 1 gets first_element_>
<Thread 1 sets second_element_>
<Thread 2 wants to access struct -> blocks>
<Thread 1 sets first_element_>
// end of Thread 1's routine
<Thread 1 releases lock>
<Thread 2 reads/sets ....>
What's the most elegant way of doing that?
EDIT: To clarify, this question is essentially about how to force every thread that uses this struct to lock a mutex (stored wherever) at the start of its routine and unlock it at the end.
EDIT2: My current (ugly) solution is to have a mutex inside MyStruct and lock that mutex at the start of each thread's routine which uses MyStruct. However, if one thread "forgets" to lock that mutex, I run into synchronization problems.

You can have a class instead of the struct and implement getters and setters for first_element_ and second_element_. Besides those class members, you will also need a member of type std::mutex.
Eventually, your class could look like this:
class Foo {
public:
    // ...
    int get_first() const noexcept {
        std::lock_guard<std::mutex> guard(my_mutex_);
        return first_element_;
    }

    std::string get_second() const noexcept {
        std::lock_guard<std::mutex> guard(my_mutex_);
        return second_element_;
    }

private:
    int first_element_;
    std::string second_element_;
    mutable std::mutex my_mutex_; // mutable so the const getters can lock it
};
Please note that the getters return copies of the data members. If you want to return references instead (like std::string const& get_second() const noexcept), you need to be careful: the code that receives the reference holds no lock guard, so there might be a race condition in that case.
In any case, the way to go is to use std::lock_guard and std::mutex around code that can be executed by more than one thread.
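For completeness, setters would follow the same pattern, holding the lock for the duration of the write; a minimal sketch (the set_first/set_second names are not from the original answer), to be placed inside the class above:
    void set_first(int value) {
        std::lock_guard<std::mutex> guard(my_mutex_);
        first_element_ = value;
    }

    void set_second(std::string value) {
        std::lock_guard<std::mutex> guard(my_mutex_);
        second_element_ = std::move(value);
    }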

You could implement something like this that combines the lock with the data:
class Wrapper
{
public:
    Wrapper(MyStruct& value, std::mutex& mutex)
        : value(value), lock(mutex) {}   // locks the mutex on construction

    MyStruct& value;

private:
    std::unique_lock<std::mutex> lock;   // unlocks automatically in the destructor
};

class Container
{
public:
    Wrapper get()
    {
        return Wrapper(value, mutex);
    }

private:
    MyStruct value;
    std::mutex mutex;
};
The mutex is locked when you call get and unlocked automatically when the Wrapper goes out of scope.
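Usage then looks something like this (a minimal sketch; first_element_ and second_element_ are the members from the question):
void example(Container& container)
{
    // get() locks the mutex; the whole struct stays locked for the
    // lifetime of the returned Wrapper.
    Wrapper w = container.get();
    int first = w.value.first_element_;
    w.value.second_element_ = "updated";
    w.value.first_element_ = first + 1;
}   // the Wrapper is destroyed here, releasing the lock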

mutex or atomic in const member function

I am reading Item 16 in Scott Meyers's Effective Modern C++.
In the later part of the item, he says
For a single variable or memory location requiring synchronization, use of a std::atomic is adequate, but once you get to two or more variables or memory locations that require manipulation as a unit, you should reach for
a mutex.
But I still don't see why it is adequate in the case of a single variable or memory location. Take the Polynomial example in this item:
class Polynomial {
public:
    using RootsType = std::vector<double>;

    RootsType roots() const
    {
        if (!rootsAreValid) { // if cache not valid
            .... // **very expensive computation**, computing roots,
                 // store them in rootVals
            rootsAreValid = true;
        }
        return rootVals;
    }

private:
    mutable std::atomic<bool> rootsAreValid{ false };
    mutable RootsType rootVals{};
};
My question is:
If thread 1 is in the middle of the heavy computation of rootVals, before rootsAreValid has been set to true, and thread 2 also calls roots() and evaluates rootsAreValid to false, then thread 2 will also step into the heavy computation of rootVals. So in this case, how is an atomic bool adequate? I still think a std::lock_guard<std::mutex> is needed to protect entry to the rootVals computation.
In your example there are two variables being synchronized: rootVals and rootsAreValid. That particular item is referring to the case where only the atomic value requires synchronization. For example:
#include <atomic>

class foo
{
public:
    void work()
    {
        ++times_called;
        /* multiple threads call this to do work */
    }

private:
    // Counts the number of times work() was called
    std::atomic<int> times_called{0};
};
times_called is the only variable in this case.
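For illustration, a minimal sketch (the main function here is made up) of several threads calling work() concurrently; since times_called is the only shared state, the atomic increment alone is enough:
#include <thread>
#include <vector>

int main()
{
    foo f;
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([&f] { for (int j = 0; j < 1000; ++j) f.work(); });
    for (auto& t : threads)
        t.join();
    // times_called now holds exactly 4000; no increments were lost.
}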
I suggest avoiding the unnecessary heavy computation by using the following code:
class Polynomial {
public:
    using RootsType = std::vector<double>;

    RootsType roots() const
    {
        if (!rootsAreValid) { // Acquiring the mutex is usually not cheap, so we check the state without locking
            std::lock_guard<std::mutex> lock_guard(sync);
            if (!rootsAreValid) // The state could have changed because the mutex was not owned at the first check
            {
                .... // **very expensive computation**, computing roots,
                     // store them in rootVals
            }
            rootsAreValid = true;
        }
        return rootVals;
    }

private:
    mutable std::mutex sync;
    mutable std::atomic<bool> rootsAreValid{ false };
    mutable RootsType rootVals{};
};
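For this kind of one-time lazy initialization, the standard library also provides std::once_flag / std::call_once, which gives the same compute-at-most-once guarantee without a hand-written double check; a minimal sketch (the rootsComputed member name is an assumption):
class Polynomial {
public:
    using RootsType = std::vector<double>;

    RootsType roots() const
    {
        std::call_once(rootsComputed, [this] {
            // **very expensive computation**, computing roots,
            // store them in rootVals
        });
        return rootVals;
    }

private:
    mutable std::once_flag rootsComputed;
    mutable RootsType rootVals{};
};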

Ensure that a thread doesn't lock a mutex twice?

Say I have a thread running a member method like runController in the example below:
class SomeClass {
public:
    SomeClass() {
        // Start controller thread
        mControllerThread = std::thread(&SomeClass::runController, this);
    }

    ~SomeClass() {
        // Stop controller thread
        mIsControllerInterrupted = true;
        // wait for thread to die.
        std::unique_lock<std::mutex> lk(mControllerThreadAlive);
        mControllerThread.join(); // the thread object must be joined before destruction
    }

    // Both controller and external client threads might call this
    void modifyObject() {
        std::unique_lock<std::mutex> lock(mObjectMutex);
        mObject.doSomeModification();
    }
    //...

private:
    std::mutex mObjectMutex;
    Object mObject;

    std::thread mControllerThread;
    std::atomic<bool> mIsControllerInterrupted;
    std::mutex mControllerThreadAlive;

    void runController() {
        std::unique_lock<std::mutex> aliveLock(mControllerThreadAlive);
        while (!mIsControllerInterrupted) {
            // Say I need to synchronize on mObject for all of these calls
            std::unique_lock<std::mutex> lock(mObjectMutex);
            someMethodA();
            modifyObject(); // but calling modifyObject will then lock the mutex twice
            someMethodC();
        }
    }
    //...
};
And some (or all) of the subroutines in runController need to modify data that is shared between threads and guarded by a mutex. Some (or all) of them might also be called by other threads that need to modify this shared data.
With all the glory of C++11 at my disposal, how can I ensure that no thread ever locks a mutex twice?
Right now, I'm passing unique_lock references into the methods as parameters as below. But this seems clunky, difficult to maintain, potentially disastrous, etc...
void modifyObject(std::unique_lock<std::mutex>& objectLock) {
    // We don't even know if this lock manages the right mutex...
    // so let's waste some time checking that.
    if (objectLock.mutex() != &mObjectMutex)
        throw std::logic_error("lock does not manage mObjectMutex");

    // Lock mutex if not locked by this thread
    bool wasObjectLockOwned = objectLock.owns_lock();
    if (!wasObjectLockOwned)
        objectLock.lock();

    mObject.doSomeModification();

    // restore previous lock state
    if (!wasObjectLockOwned)
        objectLock.unlock();
}
Thanks!
There are several ways to avoid this kind of programming error. I recommend handling it at the class design level:
- separate public and private member functions,
- only public member functions lock the mutex,
- and public member functions are never called by other member functions.
If a function is needed both internally and externally, create two variants of the function, and delegate from one to the other:
public:
    // intended to be used from the outside
    int foobar(int x, int y)
    {
        std::unique_lock<std::mutex> lock(mObjectMutex);
        return _foobar(x, y);
    }

private:
    // intended to be used from other (public or private) member functions
    int _foobar(int x, int y)
    {
        // ... code that requires locking
    }
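Applied to the modifyObject / runController code from the question, the pattern might look like this (a sketch; the _modifyObject name is made up):
public:
    // Public entry point: takes the lock, then delegates.
    void modifyObject() {
        std::unique_lock<std::mutex> lock(mObjectMutex);
        _modifyObject();
    }

private:
    // Private worker: assumes mObjectMutex is already held by the caller.
    void _modifyObject() {
        mObject.doSomeModification();
    }

    void runController() {
        std::unique_lock<std::mutex> aliveLock(mControllerThreadAlive);
        while (!mIsControllerInterrupted) {
            std::unique_lock<std::mutex> lock(mObjectMutex);
            someMethodA();
            _modifyObject(); // the mutex is already held, so call the private variant
            someMethodC();
        }
    }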

Barrier implementation using mutex & semaphore

This is an interview question:
Implement the barrier between n threads using mutexes and semaphores.
The solution I proposed :
class Barrier {
public:
    Barrier(unsigned int n) : _n(n), _count(0), _s(0) {}
    ~Barrier() {}

    void Wait() {
        _m.lock();
        _count++;
        if (_count == _n) { _s.signal(); }
        _m.unlock();

        _s.wait();
        _s.signal();
    }

private:
    unsigned int _n;
    unsigned int _count;
    Mutex _m;
    Semaphore _s;
};
Is that solution Ok?
Can the Barrier be implemented using mutexes only?
Mutexes are exactly for allowing only one thread to execute a chunk of code while blocking other threads. I've always used or written classes that lock/unlock by scope in the constructor and destructor. You'd use one like this:
void workToDo()
{
    CMutex mutex(sharedLockingObject);
    // do your code
}
When the method finishes, the mutex goes out of scope and its destructor is called. The constructor performs a blocking lock and does not return until the lock is acquired. This way you don't have to worry about exceptions leaving you with a locked mutex that blocks code when it shouldn't: an exception will naturally unwind the scope and call the destructors.
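In standard C++11 the same scope-based idiom is available as std::lock_guard; a minimal equivalent sketch, assuming sharedLockingObject is simply a std::mutex shared by the threads doing the work:
#include <mutex>

std::mutex sharedLockingObject;

void workToDo()
{
    std::lock_guard<std::mutex> lock(sharedLockingObject); // blocks until the lock is acquired
    // do your code; the lock is released when `lock` is destroyed,
    // even if an exception propagates out of this scope
}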

how to let a thread wait for destruction of an object

I want to have a thread wait for the destruction of a specific object by another thread. I thought about implementing it somehow like this:
class Foo {
private:
    pthread_mutex_t* mutex;
    pthread_cond_t* condition;

public:
    Foo(pthread_mutex_t* _mutex, pthread_cond_t* _condition) : mutex(_mutex), condition(_condition) {}

    void waitForDestruction(void) {
        pthread_mutex_lock(mutex);
        pthread_cond_wait(condition, mutex);
        pthread_mutex_unlock(mutex);
    }

    ~Foo(void) {
        pthread_mutex_lock(mutex);
        pthread_cond_signal(condition);
        pthread_mutex_unlock(mutex);
    }
};
I know, however, that I must handle spurious wakeups in the waitForDestruction method, but I can't call anything on 'this', because it could already be destroyed.
Another possibility that crossed my mind was to not use a condition variable, but lock the mutex in the constructor, unlock it in the destructor, and lock/unlock it in the waitForDestruction method - this should work with a non-recursive mutex, and IIRC I can unlock a mutex from a thread which didn't lock it, right? Will the second option suffer from any spurious wakeups?
It is always a difficult matter. But how about these lines of code:
struct FooSync {
    typedef boost::shared_ptr<FooSync> Ptr;

    FooSync() : owner(boost::this_thread::get_id()) {
    }

    void Wait() {
        assert(boost::this_thread::get_id() != owner);
        mutex.lock();
        mutex.unlock();
    }

    boost::mutex mutex;
    boost::thread::id owner;
};

struct Foo {
    Foo() { }

    ~Foo() {
        for (size_t i = 0; i < waiters.size(); ++i) {
            waiters[i]->mutex.unlock();
        }
    }

    FooSync::Ptr GetSync() {
        waiters.push_back(FooSync::Ptr(new FooSync));
        waiters.back()->mutex.lock();
        return waiters.back();
    }

    std::vector<FooSync::Ptr> waiters;
};
The solution above allows any number of destruction-wait objects on a single Foo object, as long as the memory occupied by these objects is managed correctly. Nothing seems to prevent Foo instances from being created on the stack.
The main drawback I see is that it requires the destruction-wait objects always be created in the thread that "owns" the Foo instance; otherwise a recursive lock will probably happen. Moreover, if GetSync gets called from multiple threads, a race condition may occur after the push_back.
EDIT:
OK, I have reconsidered the problem and come up with a new solution. Take a look:
typedef boost::shared_ptr<boost::shared_mutex> MutexPtr;

struct FooSync {
    typedef boost::shared_ptr<FooSync> Ptr;

    FooSync(MutexPtr const& ptr) : mutex(ptr) {
    }

    void Wait() {
        mutex->lock_shared();
        mutex->unlock_shared();
    }

    MutexPtr mutex;
};

struct Foo {
    Foo() : mutex(new boost::shared_mutex) {
        mutex->lock();
    }

    ~Foo() {
        mutex->unlock();
    }

    FooSync::Ptr GetSync() {
        return FooSync::Ptr(new FooSync(mutex));
    }

    MutexPtr mutex;
};
Now it seems reasonably cleaner, and far fewer points in the code are subject to race conditions. There is only one synchronization primitive shared between the object itself and all the sync objects. Some effort must still be taken to handle the case where Wait is called in the thread that owns the object itself (as in my first example). If the target platform does not support shared_mutex, it is fine to go along with a plain mutex; shared_mutex just reduces the contention when many FooSyncs are waiting.
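For illustration, usage of this second version might look like this (a minimal sketch; the thread body and function name are made up):
#include <boost/thread.hpp>

void example()
{
    Foo* foo = new Foo();
    FooSync::Ptr sync = foo->GetSync();   // grab a sync object while foo is still alive

    boost::thread waiter([sync] {
        sync->Wait();                     // blocks until ~Foo releases the exclusive lock
        // ... react to the destruction of foo ...
    });

    // ... use foo ...
    delete foo;                           // ~Foo unlocks the shared_mutex, waking the waiter
    waiter.join();
}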

Thread-safe initialization of atomic variable in C++

Consider the following C++11 code where class B is instantiated and used by multiple threads. Because B modifies a shared vector, I have to lock access to it in the ctor and member function foo of B. To initialize the member variable id I use a counter that is an atomic variable because I access it from multiple threads.
struct A {
    A(size_t id, std::string const& sig) : id{id}, signature{sig} {}

private:
    size_t id;
    std::string signature;
};

namespace N {
    std::atomic<size_t> counter{0};

    typedef std::vector<A> As;
    std::vector<As> sharedResource;
    std::mutex barrier;

    struct B {
        B() : id(++counter) {
            std::lock_guard<std::mutex> lock(barrier);
            sharedResource.push_back(As{});
            sharedResource[id].push_back(A(id, "B()"));
        }

        void foo() {
            std::lock_guard<std::mutex> lock(barrier);
            sharedResource[id].push_back(A(id, "foo()"));
        }

    private:
        const size_t id;
    };
}
}
Unfortunately, this code contains a race condition and does not work like this (sometimes the ctor and foo() do not use the same id). If I move the initialization of id to the ctor body which is locked by a mutex, it works:
struct B {
    B() {
        std::lock_guard<std::mutex> lock(barrier);
        id = ++counter; // counter does not have to be an atomic variable and id cannot be const anymore
        sharedResource.push_back(As{});
        sharedResource[id].push_back(A(id, "B()"));
    }
};
Can you please help me understand why the latter example works (is it because it does not use the same mutex?)? Is there a safe way to initialize id in the initializer list of B without locking it in the body of the ctor? My requirements are that id must be const and that the initialization of id takes place in the initializer list.
First, there's still a fundamental logic problem in the posted code. You use ++ counter as id. Consider the very first creation of B, in a single thread. B will have id == 1; after the push_back of sharedResource, you will have sharedResource.size() == 1, and the only legal index for accessing it will be 0.
In addition, there's a clear race condition in the code. Even if you correct the above problem (initializing id with counter ++), suppose that both counter and sharedResource.size() are currently 0; you've just initialized. Thread one enters the constructor of B, increments counter, so:
counter == 1
sharedResource.size() == 0
It is then interrupted by thread 2 (before it acquires the mutex), which also increments counter (to 2), and uses its previous value (1) as id. After the push_back in thread 2, however, we have only sharedResource.size() == 1, and the only legal index is 0.
In practice, I would avoid two separate variables (counter and sharedResource.size()) which should have the same value. From experience: two things that should be the same won't be; the only time redundant information should be used is when it is used for control, i.e. at some point you have an assert( id == sharedResource.size() ), or something similar. I'd use something like:
B::B()
{
    std::lock_guard<std::mutex> lock( barrier );
    id = sharedResource.size();
    sharedResource.push_back( As() );
    // ...
}
Or if you want to make id const:
struct B
{
    static size_t getNewId()
    {
        std::lock_guard<std::mutex> lock( barrier );
        size_t results = sharedResource.size();
        sharedResource.push_back( As() );
        return results;
    }

    B() : id( getNewId() )
    {
        std::lock_guard<std::mutex> lock( barrier );
        // ...
    }
};
(Note that this requires acquiring the mutex twice. Alternatively, you could pass the additional information necessary to complete updating sharedResource to getNewId(), and have it do the whole job.)
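Such a single-lock variant could look roughly like this (a sketch; the initSharedResource name and its tag parameter are made up):
struct B
{
    // Performs the whole update of sharedResource under one lock and
    // returns the id used to initialize the const member.
    static size_t initSharedResource( std::string const& tag )
    {
        std::lock_guard<std::mutex> lock( barrier );
        size_t newId = sharedResource.size();
        sharedResource.push_back( As() );
        sharedResource[newId].push_back( A( newId, tag ) );
        return newId;
    }

    B() : id( initSharedResource( "B()" ) )
    {
        // nothing left to do; the mutex was acquired only once
    }

private:
    const size_t id;
};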
When an object is being initialized, it should be owned by a single thread. Then when it is done being initialized, it is made shared.
If there is such a thing as thread-safe initialization, it means ensuring that an object has not become accessible to other threads before being initialized.
Of course, we can discuss thread-safe assignment of an atomic variable. Assignment is different from initialization.
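A minimal sketch of that principle (the Widget type and function names are made up): construct the object completely in one thread, then publish a pointer to it for other threads.
#include <atomic>

struct Widget { int value = 42; };

std::atomic<Widget*> shared{nullptr};   // other threads only ever see nullptr or a fully built Widget

void producer()
{
    Widget* w = new Widget;                       // initialization: owned by this thread only
    shared.store(w, std::memory_order_release);   // publish only after construction is complete
}

void consumer()
{
    if (Widget* w = shared.load(std::memory_order_acquire)) {
        // safe to read w->value here
    }
}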
You are initializing id in the constructor's member initializer list, and the vector is then updated in the constructor body. Taken together this is not an atomic operation, so in a multi-threaded system you can get hit by two threads at the same time, which changes what id ends up referring to. Welcome to thread safety 101!
Moving the initialization into the constructor body, surrounded by the lock, makes it so that only one thread at a time can access and update the vector.
The other way to fix this would be to move this into a singleton pattern, but then you are paying for the lock every time you get the object.
Now you can get into things like double-checked locking :)
http://en.wikipedia.org/wiki/Double-checked_locking