Boost Interprocess Lock persistence - c++

I have the following code, which crashes. I suspect this happens because I am storing pointers to stack-allocated objects. But I want to avoid the cost of stack-allocating a mutex and a scoped lock every time:
class Cache {
public:
    void createCacheLock(const char* name) {
        named_mutex mutex_(open_only, name);
        mutex = &mutex_;
        scoped_lock<named_mutex> cache_lock_(mutex_, defer_lock);
        cache_lock = &cache_lock_;
    }
    void updateCache(const char* name, int newvalue) {
        cache_lock->lock();
        /* Do update work */
        cache_lock->unlock();
    }
private:
    named_mutex* mutex;
    scoped_lock<named_mutex>* cache_lock;
};
I then exit the function (cache_lock is a class field), and when I later call cache_lock->lock() from a different class method, my program crashes (in this case updateCache crashes on the cache_lock->lock() line).
I have two questions. First, how can I create a "persistent" cache_lock, so that I can reuse it without the call to named_mutex(open_only, etc.)?
That is, I would like to avoid doing this every time:
void updateCache(const char* name, int newvalue) {
    named_mutex mutex_(open_only, name);
    scoped_lock<named_mutex> cache_lock_(mutex_, defer_lock);
    cache_lock_.lock();
    /* Do update work */
    cache_lock_.unlock();
}
Secondly, is repeating the procedure above (namely, opening the mutex and creating a lock from it) an expensive operation?

You have pointers to local variables. Local variables exist on the stack only while the function runs; when the function returns, those objects are destroyed and their destructors are called. The memory those objects occupied will be reused by the next function called. This means that your pointers not only point to possibly destroyed objects, they can also point to memory being used for something completely different. The same goes for references, of course.
Allocate those objects on the heap with new instead, or use smart pointers.
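For example, a minimal sketch of the heap route, assuming C++11's std::unique_ptr and the same boost::interprocess names used in the question:

class Cache {
public:
    void createCacheLock(const char* name) {
        // Heap-allocated, so both objects outlive this function.
        mutex_.reset(new named_mutex(open_only, name));
        cache_lock_.reset(new scoped_lock<named_mutex>(*mutex_, defer_lock));
    }
    void updateCache(const char* name, int newvalue) {
        cache_lock_->lock();
        /* Do update work */
        cache_lock_->unlock();
    }
private:
    // Declaration order matters: members are destroyed in reverse declaration
    // order, so the mutex outlives the lock that refers to it.
    std::unique_ptr<named_mutex> mutex_;
    std::unique_ptr<scoped_lock<named_mutex> > cache_lock_;
};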
You can call specific constructors of the objects in your class in your constructor's initializer list:
class cache
{
public:
    cache(const std::string& lock_name)
        : mutex_(open_only, lock_name.c_str()),
          cache_lock_(mutex_, defer_lock)
    {}
    // ...
private:
    named_mutex mutex_;
    scoped_lock<named_mutex> cache_lock_;
    // ...
};

Related

Is this inter-thread object sharing strategy sound?

I'm trying to come up with a fast way of solving the following problem:
I have a thread which produces data, and several threads which consume it. I don't need to queue produced data, because it is produced much more slowly than it is consumed (and even if that occasionally failed to be the case, it wouldn't be a problem if a data point were skipped). So, basically, I have an object that encapsulates the "most recent state", which only the producer thread is allowed to update.
My strategy is as follows (please let me know if I'm completely off my rocker):
I've created three classes for this example: Thing (the actual state object), SharedObject<Thing> (an object that can be local to each thread, and gives that thread access to the underlying Thing), and SharedObjectManager<Thing>, which wraps up a shared_ptr along with a mutex.
The instance of the SharedObjectManager (SOM) is a global variable.
When the producer starts, it instantiates a Thing and tells the global SOM about it. It then makes a copy, and does all of its updating work on that copy. When it is ready to commit its changes to the Thing, it passes the new Thing to the global SOM, which locks its mutex, updates the shared pointer it keeps, and then releases the lock.
Meanwhile, the consumer threads all instantiate SharedObject<Thing>. These objects each keep a pointer to the global SOM, as well as a cached copy of the shared_ptr kept by the SOM. Each keeps this cached copy until update() is explicitly called.
I believe this is getting hard to follow, so here's some code:
#include <mutex>
#include <iostream>
#include <memory>

class Thing
{
private:
    int _some_member = 10;
public:
    int some_member() const { return _some_member; }
    void some_member(int val) { _some_member = val; }
};

// one global instance
template<typename T>
class SharedObjectManager
{
private:
    std::shared_ptr<T> objPtr;
    std::mutex objLock;
public:
    std::shared_ptr<T> get_sptr()
    {
        std::lock_guard<std::mutex> lck(objLock);
        return objPtr;
    }
    void commit_new_object(std::shared_ptr<T> new_object)
    {
        std::lock_guard<std::mutex> lck(objLock);
        objPtr = new_object;
    }
};

// one instance per consumer thread.
template<typename T>
class SharedObject
{
private:
    SharedObjectManager<T>* som;
    std::shared_ptr<T> cache;
public:
    SharedObject(SharedObjectManager<T>* backend) : som(backend)
    { update(); }
    void update()
    {
        cache = som->get_sptr();
    }
    T& operator*()
    {
        return *cache;
    }
    T* operator->()
    {
        return cache.get();
    }
};

// no actual threads in this test, just a quick sanity check.
SharedObjectManager<Thing> glbSOM;

int main(void)
{
    glbSOM.commit_new_object(std::make_shared<Thing>());
    SharedObject<Thing> myobj(&glbSOM);
    std::cout << myobj->some_member() << std::endl;
    // prints "10".
}
The idea for use by the producer thread is:
// initialization - on startup
auto firstStateObj = std::make_shared<Thing>();
glbSOM.commit_new_object(firstStateObj);

// main loop
while (1)
{
    // invoke copy constructor to copy the current live Thing object
    auto nextState = std::make_shared<Thing>(*(glbSOM.get_sptr()));
    // do stuff to nextState, gradually filling out its new value
    // based on incoming data from other sources, etc.
    ...
    // commit the changes to the shared memory location
    glbSOM.commit_new_object(nextState);
}
The use by consumers would be:
SharedObject<Thing> thing(&glbSOM);
while (1)
{
    // think about the data contained in thing, and act accordingly...
    doStuffWith(thing->some_member());
    // re-cache the thing
    thing.update();
}
Thanks!
That is way overengineered. Instead, I'd suggest the following:
Create a pointer Thing* theThing together with a protecting mutex. Either a global one, or shared by some other means. Initialize it to nullptr.
In your producer: use two local objects of type Thing - Thing thingOne and Thing thingTwo (remember, thingOne is no better than thingTwo; one is just called thingOne for a reason, but this is a thing thing. Watch out for cats.). Start by populating thingOne. When done, lock the mutex, copy thingOne's address to theThing, unlock the mutex. Start populating thingTwo. When done, see above. Repeat until killed.
In every listener: (make sure the pointer is not nullptr). Lock the mutex. Make a copy of the object pointed to by theThing. Unlock the mutex. Work with your copy. Burn after reading. Repeat until killed.
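A rough sketch of that scheme (Thing is the class from the question; populate() and doStuffWith() are hypothetical stand-ins for the real work):

#include <mutex>

Thing* theThing = nullptr;     // points at the latest complete state
std::mutex theMutex;           // protects theThing

void producer()
{
    Thing thingOne, thingTwo;
    bool useOne = true;
    for (;;) {
        Thing& scratch = useOne ? thingOne : thingTwo;
        populate(scratch);             // fill in the next state off-line
        {
            std::lock_guard<std::mutex> lk(theMutex);
            theThing = &scratch;       // publish the freshly filled buffer
        }
        useOne = !useOne;              // next round overwrites the unpublished buffer
    }
}

void listener()
{
    for (;;) {
        Thing copy;
        {
            std::lock_guard<std::mutex> lk(theMutex);
            if (!theThing) continue;   // nothing published yet
            copy = *theThing;          // copy while holding the mutex
        }
        doStuffWith(copy);             // work on the private copy, lock released
    }
}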

How to release heap memory of thread local storage

I have a structure used for thread local storage like this:
namespace {
    typedef boost::unordered_map< std::string, std::vector<xxx> > YYY;
    boost::thread_specific_ptr<YYY> cache;

    void initCache() {
        // The first time called by the current thread.
        if (!cache.get()) {
            cache.reset(new YYY());
        }
    }

    void clearCache() {
        if (cache.get()) {
            cache.reset();
        }
    }
}
And a class whose object could have been created by the main thread:
class A {
public:
    void f() {
        initCache();
        // and for example:
        insertIntoCache();
    }
    ~A() {
        clearCache(); // <-- Does/Can this do anything good ??
    }
};
Multiple threads can access object(s) of A stored, for example, in a global container. Each of these threads needs to call A::f() from time to time. So they each create their own copy of the cache on the heap once, and finally join when they are done with all their jobs.
So the question is: who is going to clean up the threads' memory, and how?
Thank you
There's no reason to call clearCache().
Once the thread exits or the thread_specific_ptr goes out of scope, the cleanup function will be invoked. If you don't pass a cleanup function to the thread_specific_ptr's constructor, it will just use delete.
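For illustration, a sketch of passing your own cleanup function; this one is effectively what the default delete behavior already does:

#include <boost/thread/tss.hpp>

void cleanupCache(YYY* p)
{
    // Invoked automatically in each thread as it exits (or on reset()).
    delete p;
}

boost::thread_specific_ptr<YYY> cache(&cleanupCache);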

Read-write thread-safe smart pointer in C++, x86-64

I'm developing a lock-free data structure and the following problem arises.
I have a writer thread that creates objects on the heap and wraps them in a smart pointer with a reference counter. I also have a lot of reader threads that work with these objects. The code can look like this:
SmartPtr ptr;

class Reader : public Thread {
    virtual void Run() {
        for (;;) {
            SmartPtr local(ptr);
            // do smth
        }
    }
};

class Writer : public Thread {
    virtual void Run() {
        for (;;) {
            SmartPtr newPtr(new Object);
            ptr = newPtr;
        }
    }
};

int main() {
    Pool* pool = SystemThreadPool();
    pool->Run(new Reader());
    pool->Run(new Writer());
    for (;;) {} // wait for crash :(
}
When I create a thread-local copy of ptr it means at least:
Read an address.
Increment the reference counter.
I can't do these two operations atomically, and thus sometimes my readers work with an already-deleted object.
The question is - what kind of smart pointer should I use to make read-write access from several threads, with correct memory management, possible? A solution should exist, since Java programmers don't even have to care about such a problem, simply relying on the fact that all objects are references and are deleted only when nobody uses them.
For PowerPC I found http://drdobbs.com/184401888, which looks nice, but uses Load-Linked and Store-Conditional instructions, which we don't have on x86.
As far as I understand, Boost pointers provide such functionality only by using locks. I need a lock-free solution.
boost::shared_ptr has atomic_store, which uses a "lock-free" spinlock that should be fast enough for 99% of possible cases.
boost::shared_ptr<Object> ptr;

class Reader : public Thread {
    virtual void Run() {
        for (;;) {
            boost::shared_ptr<Object> local(boost::atomic_load(&ptr));
            // do smth
        }
    }
};

class Writer : public Thread {
    virtual void Run() {
        for (;;) {
            boost::shared_ptr<Object> newPtr(new Object);
            boost::atomic_store(&ptr, newPtr);
        }
    }
};

int main() {
    Pool* pool = SystemThreadPool();
    pool->Run(new Reader());
    pool->Run(new Writer());
    for (;;) {}
}
EDIT:
In response to a comment below, the implementation is in "boost/shared_ptr.hpp"...
template<class T> void atomic_store( shared_ptr<T> * p, shared_ptr<T> r )
{
    boost::detail::spinlock_pool<2>::scoped_lock lock( p );
    p->swap( r );
}

template<class T> shared_ptr<T> atomic_exchange( shared_ptr<T> * p, shared_ptr<T> r )
{
    boost::detail::spinlock & sp = boost::detail::spinlock_pool<2>::spinlock_for( p );

    sp.lock();
    p->swap( r );
    sp.unlock();

    return r; // return std::move( r )
}
With some jiggery-pokery you should be able to accomplish this using InterlockedCompareExchange128. Store the reference count and pointer in a two-element __int64 array. If the reference count is in array[0] and the pointer in array[1], the atomic update would look like this:
while (true)
{
    __int64 comparand[2];
    comparand[0] = refCount;
    comparand[1] = pointer;
    if (1 == InterlockedCompareExchange128(
            array,
            pointer,
            refCount + 1,
            comparand))
    {
        // Pointer is ready for use. Exit the while loop.
        break;
    }
}
If an InterlockedCompareExchange128 intrinsic function isn't available for your compiler then you may use the underlying CMPXCHG16B instruction instead, if you don't mind mucking around in assembly language.
The solution proposed by RobH doesn't work. It has the same problem as the original question: when accessing the reference count object, it might already have been deleted.
The only way I see of solving the problem without a global lock (as in boost::atomic_store) or conditional read/write instructions is to somehow delay the destruction of the object (or the shared reference-count object, if such a thing is used). So zennehoy has a good idea, but his method is too unsafe.
The way I might do it is by keeping copies of all the pointers in the writer thread so that the writer can control the destruction of the objects:
class Writer : public Thread {
    virtual void Run() {
        list<SmartPtr> ptrs; // list that holds all the old ptr values
        for (;;) {
            SmartPtr newPtr(new Object);
            if (ptr)
                ptrs.push_back(ptr); // push previous pointer into the list
            ptr = newPtr;

            // Periodically go through the list and destroy objects that are not
            // referenced by other threads
            for (auto it = ptrs.begin(); it != ptrs.end(); )
                if (it->refCount() == 1)
                    it = ptrs.erase(it);
                else
                    ++it;
        }
    }
};
However there are still requirements for the smart pointer class. This doesn't work with shared_ptr as the reads and writes are not atomic. It almost works with boost::intrusive_ptr. The assignment on intrusive_ptr is implemented like this (pseudocode):
// create temporary from rhs
tmp.ptr = rhs.ptr;
if (tmp.ptr)
    intrusive_ptr_add_ref(tmp.ptr);

// swap(tmp, lhs)
T* x = lhs.ptr;
lhs.ptr = tmp.ptr;
tmp.ptr = x;

// destroy temporary
if (tmp.ptr)
    intrusive_ptr_release(tmp.ptr);
As far as I understand, the only thing missing here is a compiler-level memory fence before lhs.ptr = tmp.ptr;. With that added, both reading rhs and writing lhs would be thread-safe under strict conditions: 1) x86 or x64 architecture, 2) atomic reference counting, 3) the rhs refcount must not go to zero during the assignment (guaranteed by the Writer code above), and 4) only one thread writing to lhs (using CAS you could have several writers).
Anyway, you could create your own smart pointer class based on intrusive_ptr with necessary changes. Definitely easier than re-implementing shared_ptr. And besides, if you want performance, intrusive is the way to go.
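A possible starting point for such a class, as a sketch only (boost::detail::atomic_count is an internal Boost header providing an atomic counter; std::atomic<long> would also do, and the fence work described above is not shown):

#include <boost/intrusive_ptr.hpp>
#include <boost/detail/atomic_count.hpp>

class Object
{
public:
    Object() : mRefCount(0) {}
private:
    friend void intrusive_ptr_add_ref(Object* p) { ++p->mRefCount; }
    friend void intrusive_ptr_release(Object* p)
    {
        if (--p->mRefCount == 0)
            delete p;
    }
    boost::detail::atomic_count mRefCount;
};

typedef boost::intrusive_ptr<Object> SmartPtr;  // then modify assignment as discussed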
The reason this works much more easily in Java is garbage collection. In C++, you have to manually ensure that a value is not just starting to be used by a different thread when you want to delete it.
A solution I've used in a similar situation is to simply delay the deletion of the value. I create a separate thread that iterates through a list of things to be deleted. When I want to delete something, I add it to this list with a timestamp. The deleting thread waits until some fixed time after this timestamp before actually deleting the value. You just have to make sure that the delay is large enough to guarantee that any temporary use of the value has completed.
100 milliseconds would have been enough in my case, I chose a few seconds to be safe.
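A sketch of that delayed-deletion scheme, assuming C++11's <chrono> and <thread> (retireObject() would replace the immediate delete in the writer; Object is the type from the question):

#include <chrono>
#include <deque>
#include <mutex>
#include <thread>

struct PendingDelete {
    Object* obj;
    std::chrono::steady_clock::time_point when;  // when the object was retired
};

std::deque<PendingDelete> gPending;
std::mutex gPendingMutex;

void retireObject(Object* obj)
{
    std::lock_guard<std::mutex> lk(gPendingMutex);
    gPending.push_back(PendingDelete{obj, std::chrono::steady_clock::now()});
}

void deleterThread()
{
    const std::chrono::seconds gracePeriod(5);   // "a few seconds to be safe"
    for (;;) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::lock_guard<std::mutex> lk(gPendingMutex);
        const auto now = std::chrono::steady_clock::now();
        while (!gPending.empty() && now - gPending.front().when >= gracePeriod) {
            delete gPending.front().obj;  // old enough: no reader can still hold it
            gPending.pop_front();
        }
    }
}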

Does a static object within a function introduce a potential race condition?

I'm curious about the following code:
class MyClass
{
public:
    MyClass() : _myArray(new int[1024]) {}
    ~MyClass() { delete [] _myArray; }
private:
    int* _myArray;
};
// This function may be called by different threads in an unsynchronized manner
void MyFunction()
{
    static const MyClass _myClassObject;
    [...]
}
Is there a possible race condition in the above code? Specifically, is the compiler likely to generate code equivalent to the following, "behind the scenes"?
void MyFunction()
{
    static bool _myClassObjectInitialized = false;
    if (_myClassObjectInitialized == false)
    {
        _myClassObjectInitialized = true;
        _myClassObject.MyClass(); // call constructor to set up object
    }
    [...]
}
... in which case, if two threads were to call MyFunction() nearly-simultaneously, then _myArray might get allocated twice, causing a memory leak?
Or is this handled correctly somehow?
There's absolutely a possible race condition there. Whether or not there actually is one is pretty damn undefined. (Note that since C++11, compilers are required to make the initialization of function-local statics thread-safe; with older compilers you get no such guarantee.) You shouldn't use such code in single-threaded scenarios because it's bad design, but it could be the death of your app in multithreaded ones. Anything that is static const like that should probably go in a convenient namespace, and get allocated at the start of the application.
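A sketch of that namespace suggestion, reusing MyClass from the question: the object is then constructed during static initialization, before main() and before any threads start.

namespace
{
    const MyClass g_myClassObject;  // built before main(), so no lazy-init race
}

void MyFunction()
{
    // use g_myClassObject directly; no first-call initialization to race on
    [...]
}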
Use a semaphore if you're using multiple threads; that's what they're for.

Need some advice to make the code multithreaded

I received code that was not written for a multi-threaded app; now I have to modify it to support multi-threading.
I have a singleton class (MyCenterSigltonClass) that, based on the instructions at
http://en.wikipedia.org/wiki/Singleton_pattern
I made thread-safe.
Now I see that the class contains 10-12 members, some with getter/setter methods.
Some members are declared static and are class pointers, like:
static Class_A* f_static_member_a;
static Class_B* f_static_member_b;
For these members, I defined a mutex (like mutex_a) INSIDE the class (Class_A); I didn't add the mutex directly to MyCenterSigltonClass. The reason is that they have a one-to-one association with MyCenterSigltonClass, so I think I have the option of defining the mutex for f_static_member_a in either MyCenterSigltonClass or Class_A.
1) Am I right?
Also, my singleton class (MyCenterSigltonClass) contains some other members, like
Class_C f_classC;
For these kinds of member variables, should I define a mutex for each of them in MyCenterSigltonClass to make them thread-safe? What would be a good way to handle these cases?
Appreciate for any suggestion.
-Nima
Whether the members are static or not doesn't really matter. How you protect the member variables really depends on how they are accessed from public methods.
You should think about a mutex as a lock that protects some resource from concurrent read/write access. You don't need to think about protecting the internal class objects necessarily, but the resources within them. You also need to consider the scope of the locks you'll be using, especially if the code wasn't originally designed to be multithreaded. Let me give a few simple examples.
class A
{
private:
    int mValuesCount;
    int* mValues;
public:
    A(int count, int* values)
    {
        mValuesCount = count;
        mValues = (count > 0) ? new int[count] : NULL;
        if (mValues)
        {
            memcpy(mValues, values, count * sizeof(int));
        }
    }
    int getValues(int count, int* values) const
    {
        if (mValues && values)
        {
            memcpy(values, mValues, ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
        }
        return mValuesCount;
    }
};

class B
{
private:
    A* mA;
public:
    B()
    {
        int values[5] = { 1, 2, 3, 4, 5 };
        mA = new A(5, values);
    }
    const A* getA() const { return mA; }
};
In this code, there's no need to protect mA because there's no chance of conflicting access across multiple threads. None of the threads can modify the state of mA, so all concurrent access just reads from mA. However, if we modify class A:
class A
{
private:
    int mValuesCount;
    int* mValues;
public:
    A(int count, int* values)
    {
        mValuesCount = 0;
        mValues = NULL;
        setValues(count, values);
    }
    int getValues(int count, int* values) const
    {
        if (mValues && values)
        {
            memcpy(values, mValues, ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
        }
        return mValuesCount;
    }
    void setValues(int count, int* values)
    {
        delete [] mValues;
        mValuesCount = count;
        mValues = (count > 0) ? new int[count] : NULL;
        if (mValues)
        {
            memcpy(mValues, values, count * sizeof(int));
        }
    }
};
We can now have multiple threads calling B::getA() and one thread can read from mA while another thread writes to mA. Consider the following thread interaction:
Thread A: a->getValues(maxCount, values);
Thread B: a->setValues(newCount, newValues);
It's possible that Thread B will delete mValues while Thread A is in the middle of copying it. In this case, you would need a mutex within class A to protect access to mValues and mValuesCount:
int getValues(int count, int* values) const
{
    // TODO: Lock mutex.
    if (mValues && values)
    {
        memcpy(values, mValues, ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
    }
    int returnCount = mValuesCount;
    // TODO: Unlock mutex.
    return returnCount;
}

void setValues(int count, int* values)
{
    // TODO: Lock mutex.
    delete [] mValues;
    mValuesCount = count;
    mValues = (count > 0) ? new int[count] : NULL;
    if (mValues)
    {
        memcpy(mValues, values, count * sizeof(int));
    }
    // TODO: Unlock mutex.
}
This will prevent concurrent read/write on mValues and mValuesCount. Depending on the locking mechanisms available in your environment, you may be able to use a read-only locking mechanism in getValues() to prevent multiple threads from blocking on concurrent read access.
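For illustration, a minimal sketch of that read-only locking idea using C++17's std::shared_mutex (boost::shared_mutex offers the same semantics on older toolchains):

#include <shared_mutex>

class Counter
{
private:
    mutable std::shared_mutex mMutex;
    int mValue = 0;
public:
    int get() const
    {
        std::shared_lock<std::shared_mutex> lock(mMutex);  // readers share access
        return mValue;
    }
    void set(int v)
    {
        std::unique_lock<std::shared_mutex> lock(mMutex);  // writers are exclusive
        mValue = v;
    }
};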
However, you'll also need to understand the scope of the locking you need to implement if class A is more complex:
class A
{
private:
    int mValuesCount;
    int* mValues;
public:
    A(int count, int* values)
    {
        mValuesCount = 0;
        mValues = NULL;
        setValues(count, values);
    }
    int getValueCount() const { return mValuesCount; }
    int getValues(int count, int* values) const
    {
        if (mValues && values)
        {
            memcpy(values, mValues, ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
        }
        return mValuesCount;
    }
    void setValues(int count, int* values)
    {
        delete [] mValues;
        mValuesCount = count;
        mValues = (count > 0) ? new int[count] : NULL;
        if (mValues)
        {
            memcpy(mValues, values, count * sizeof(int));
        }
    }
};
In this case, you could have the following thread interaction:
Thread A: int maxCount = a->getValueCount();
Thread A: // allocate memory for "maxCount" int values
Thread B: a->setValues(newCount, newValues);
Thread A: a->getValues(maxCount, values);
Thread A has been written as though calls to getValueCount() and getValues() will be an uninterrupted operation, but Thread B has potentially changed the count in the middle of Thread A's operations. Depending on whether the new count is larger or smaller than the original count, it may take a while before you discover this problem. In this case, class A would need to be redesigned or it would need to provide some kind of transaction support so the thread using class A could block/unblock other threads:
Thread A: a->lockValues();
Thread A: int maxCount = a->getValueCount();
Thread A: // allocate memory for "maxCount" int values
Thread B: a->setValues(newCount, newValues); // Blocks until Thread A calls unlockValues()
Thread A: a->getValues(maxCount, values);
Thread A: a->unlockValues();
Thread B: // Completes call to setValues()
Since the code wasn't initially designed to be multithreaded, it's very likely you'll run into these kinds of issues where one method call uses information from an earlier call, but there was never a concern for the state of the object changing between those calls.
Now, begin to imagine what could happen if there are complex state dependencies among the objects within your singleton and multiple threads can modify the state of those internal objects. It can all become very, very messy with a large number of threads and debugging can become very difficult.
So as you try to make your singleton thread-safe, you need to look at several layers of object interactions. Some good questions to ask:
Do any of the methods on the singleton reveal internal state that may change between method calls (as in the last example I mention)?
Are any of the internal objects revealed to clients of the singleton?
If so, do any of the methods on those internal objects reveal internal state that may change between method calls?
If internal objects are revealed, do they share any resources or state dependencies?
You may not need any locking if you're just reading state from internal objects (first example). You may need to provide simple locking to prevent concurrent read/write access (second example). You may need to redesign the classes or provide clients with the ability to lock object state (third example). Or you may need to implement more complex locking where internal objects share state information across threads (e.g. a lock on a resource in class Foo requires a lock on a resource in class Bar, but locking that resource in class Bar doesn't necessarily require a lock on a resource in class Foo).
Implementing thread-safe code can become a complex task depending on how all your objects interact. It can be much more complicated than the examples I've given here. Just be sure you clearly understand how your classes are used and how they interact (and be prepared to spend some time tracking down difficult to reproduce bugs).
If this is the first time you're doing threading, consider not accessing the singleton from the background thread. You can get it right, but you probably won't get it right the first time.
Realize that if your singleton exposes pointers to other objects, these should be made thread safe as well.
You don't have to define a mutex for each member. For example, you could instead use a single mutex to synchronize access to each member, e.g.:
class foo
{
public:
    ...
    void some_op()
    {
        // acquire "lock_" and release using RAII ...
        Lock guard(lock_);
        a_++;
    }
    void set_b(bar* b)
    {
        // acquire "lock_" and release using RAII ...
        Lock guard(lock_);
        b_ = b;
    }
private:
    int a_;
    bar* b_;
    mutex lock_;
};
Of course a "one lock" solution may not be suitable in your case. That's up to you to decide. Regardless, simply introducing locks doesn't make the code thread-safe. You have to use them in the right place in the right way to avoid race conditions, deadlocks, etc. There are lots of concurrency issues you could run into; one classic is sketched below.
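The classic lock-ordering deadlock, as a minimal sketch: each thread holds one mutex while waiting for the other, and neither can proceed.

#include <mutex>

std::mutex mutexA, mutexB;

void thread1()
{
    std::lock_guard<std::mutex> a(mutexA);
    std::lock_guard<std::mutex> b(mutexB);  // deadlocks if thread2 already holds mutexB
}

void thread2()
{
    std::lock_guard<std::mutex> b(mutexB);
    std::lock_guard<std::mutex> a(mutexA);  // deadlocks if thread1 already holds mutexA
}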
Furthermore, you don't always need mutexes, or other threading mechanisms like TSS, to make code thread-safe. For example, the following function func is thread-safe:
class Foo;

void func(Foo& f)
{
    f.some_op(); // Foo::some_op() of course needs to be thread-safe.
}

// Thread 1
Foo a;
func(a);

// Thread 2
Foo b;
func(b);
While func above is thread-safe, the operations it invokes may not be. The point is that you don't always need to pepper your code with mutexes and other threading mechanisms to make it thread-safe. Sometimes restructuring the code is sufficient.
There's a lot of literature on multithreaded programming. It's definitely not easy to get right so take your time in understanding the nuances, and take advantage of existing frameworks like Boost.Thread to mitigate some of the inherent and accidental complexities that exist in the lower-level multithreading APIs.
I'd really recommend the Interlocked... methods to increment, decrement, and compare-and-swap values when writing code that needs to be multi-thread-aware. I don't have first-hand C++ experience, but a quick search for http://www.bing.com/search?q=c%2B%2B+interlocked reveals lots of confirming advice. If you need perf, these will likely be faster than locking.
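For example, a minimal sketch of the Win32 flavor of these calls:

#include <windows.h>

volatile LONG g_counter = 0;

void example()
{
    InterlockedIncrement(&g_counter);   // atomic ++g_counter
    InterlockedDecrement(&g_counter);   // atomic --g_counter
    // compare-and-swap: set g_counter to 42 only if it currently equals 0
    InterlockedCompareExchange(&g_counter, 42, 0);
}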
As stated by #Void a mutex alone is not always the solution to a concurrency problem:
Regardless, simply introducing locks doesn't make the code
thread-safe. You have to use them in the right place in the right way
to avoid race conditions, deadlocks, etc. There are lots of
concurrency issues you could run in to.
I want to add another example:
class MyClass
{
    mutex m_mutex;
    AnotherClass m_anotherClass;

    void setObject(AnotherClass& anotherClass)
    {
        m_mutex.lock();
        m_anotherClass = anotherClass;
        m_mutex.unlock();
    }

    AnotherClass getObject()
    {
        AnotherClass anotherClass;
        m_mutex.lock();
        anotherClass = m_anotherClass;
        m_mutex.unlock();
        return anotherClass;
    }
};
In this case the getObject() method is always safe, because it is protected with the mutex and returns a copy of the object to the caller, which may be in a different class and thread. This means you are working on a copy, which might be stale (in the meantime another thread might have changed m_anotherClass by calling setObject()). Now, what if you turn m_anotherClass into a pointer instead of an object variable?
class MyClass
{
    mutex m_mutex;
    AnotherClass* m_anotherClass;

    void setObject(AnotherClass* anotherClass)
    {
        m_mutex.lock();
        m_anotherClass = anotherClass;
        m_mutex.unlock();
    }

    AnotherClass* getObject()
    {
        AnotherClass* anotherClass;
        m_mutex.lock();
        anotherClass = m_anotherClass;
        m_mutex.unlock();
        return anotherClass;
    }
};
This is an example where a mutex is not enough to solve all the problems.
With pointers you only get a copy of the pointer; the pointed-to object is the same in both the caller and the method. So even if the pointer was valid at the time getObject() was called, you have no guarantee that the pointed-to value will exist during the operation you are performing with it, simply because you don't have control over the object's lifetime. That's why you should use object variables as much as possible and avoid pointers (if you can).