Ok, the question title is a bit hard to phrase. What I am trying to achieve is to create a template class with get/set functions that can handle both simple types and structures.
This is simple for types such as int and char, but when the template type 'T' is a struct it gets harder.
For example, here is a template class, where I have omitted various parts of it (such as constructor, etc), but it shows the get/set function:
EDIT: Only this class is allowed to modify the data, so passing a reference to the outside is not allowed. The reason is that I want to put a mutex around the set/get. I have updated the functions accordingly...
template <class T> class storage
{
private:
T m_var;
pthread_mutex_t m_mutex;
public:
void set(T value)
{
pthread_mutex_lock(&m_mutex);
m_var = value;
pthread_mutex_unlock(&m_mutex);
}
T get(void)
{
T tmp;
// Note: Can't return the value within the mutex otherwise we could get into a deadlock. So
// we have to first read the value into a temporary variable and then return that.
pthread_mutex_lock(&m_mutex);
tmp = m_var;
pthread_mutex_unlock(&m_mutex);
return tmp;
}
};
Then consider the following code:
struct shape_t
{
int numSides;
int x;
int y;
};
int main()
{
storage<int> intStore;
storage<shape_t> shapeStore;
// To set int value I can do:
intStore.set(2);
// To set shape_t value I can do:
shape_t tempShape;
tempShape.numSides = 2;
tempShape.x = 5;
tempShape.y = 4;
shapeStore.set(tempShape);
// To modify 'x' (and keep y and numSides the same) I have to do:
tempShape = shapeStore.get();
tempShape.x = 5;
shapeStore.set(tempShape);
}
What I want to be able to do, if it's possible, is to set the members of shape_t individually via some means in the template class, something like:
shapeStore.set(T::numSides, 2);
shapeStore.set(T::x, 5);
shapeStore.set(T::y, 4);
And not have to use a temp var. Is this possible? If so, how?
I looked at this answer, but it did not quite do what I wanted because it is for a specific structure type.
Make your get() member return a reference:
T& get()
{
return m_var;
}
Then you could say
shapeStore.get().x = 42;
Note it is good practice to add a const overload:
const T& get() const
{
return m_var;
}
Also note that if your get and set methods really do nothing special, as in your example, you might consider making the data public and doing away with getters/setters:
template <class T> struct storage
{
T m_var;
};
Edit: If you want to allow synchronised changes to the member, an option is to have a method that takes a modifying function. The function is applied inside the class, in your case, protected by the mutex. For example,
template <class T> struct storage
{
storage() : m_var() {}
void do_stuff(std::function<void(T&)> f)
{
std::lock_guard<std::mutex> lock(m_mutex);
f(m_var);
}
private:
T m_var;
std::mutex m_mutex;
};
Then you can modify members in a synchronised manner:
storage<shape_t> shapeStore;
shapeStore.do_stuff([](shape_t& s)
{ s.x = 42;
s.y = 100; });
If you don't have C++11 lambdas you can pass a plain function instead:
void foo(shape_t& s) { s.x = 42; }
shapeStore.do_stuff(foo);
Your design is fairly workable for primitive types, but for class types it requires you to replicate their entire interface, which quickly becomes unmanageable. Even in the case of primitive types, you might want to enable more complex atomic operations than simply get and set, e.g., increment, add, or multiply. The key to simplifying the design is to realize that you don't actually want to interpose on every single operation the client code performs on the data object; you only need to interpose before and after the client code atomically performs a sequence of operations.
Anthony Williams wrote a great article in Dr. Dobb's Journal years ago about this exact problem, using a design where the manager object provides a handle to the client code, and the client uses that handle to access the managed object. The manager interposes only on handle creation and destruction, allowing clients with a handle unfettered access to the managed object. (See the recent proposal for standardization for excruciating detail.)
You could apply the approach to your problem fairly easily. First, I'll replicate some parts of the C++11 threads library because they make it MUCH easier to write correct code in the presence of exceptions:
class mutex {
pthread_mutex_t m_mutex;
// Forbid copy/move
mutex(const mutex&); // C++11: = delete;
mutex& operator = (const mutex&); // C++11: = delete;
public:
mutex() { pthread_mutex_init(&m_mutex, NULL); }
~mutex() { pthread_mutex_destroy(&m_mutex); }
void lock() { pthread_mutex_lock(&m_mutex); }
void unlock() { pthread_mutex_unlock(&m_mutex); }
bool try_lock() { return pthread_mutex_trylock(&m_mutex) == 0; }
};
class lock_guard {
mutex& mtx;
public:
lock_guard(mutex& mtx_) : mtx(mtx_) { mtx.lock(); }
~lock_guard() { mtx.unlock(); }
};
The class mutex wraps up a pthread_mutex_t concisely. It handles creation and destruction automatically, and saves our poor fingers some keystrokes. lock_guard is a handy RAII wrapper that automatically unlocks the mutex when it goes out of scope.
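A quick usage sketch of these wrappers (the names g_mutex and g_counter are just illustrative, not part of the original code):
mutex g_mutex;
int g_counter = 0;
void increment()
{
    lock_guard guard(g_mutex); // locks in the constructor
    ++g_counter;               // protected region
}                              // unlocks in the destructor, even if an exception is thrown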
storage then becomes incredibly simple:
template <class> class handle;
template <class T> class storage
{
private:
T m_var;
mutex m_mutex;
public:
storage() : m_var() {}
storage(const T& var) : m_var(var) {}
friend class handle<T>;
};
It's simply a box with a T and a mutex inside. storage trusts the handle class to be friendly and allows it to poke around its insides. It should be clear that storage does not directly provide any access to m_var, so the only way it could possibly be modified is via a handle.
handle is a bit more complex:
template <class T>
class handle {
T& m_data;
lock_guard m_lock;
public:
handle(storage<T>& s) : m_data(s.m_var), m_lock(s.m_mutex) {}
T& operator* () const {
return m_data;
}
T* operator -> () const {
return &m_data;
}
};
It keeps a reference to the data item and holds one of those handy automatic lock objects. The use of operator* and operator-> makes handle objects behave like a pointer to T.
Since the only way to access the object inside storage is through a handle, and a handle guarantees that the appropriate mutex is held during its lifetime, there's no way for client code to forget to lock the mutex, or to accidentally access the stored object without locking the mutex. It can't even forget to unlock the mutex, which is nice as well. Usage is simple (See it working live at Coliru):
storage<int> foo;
int example() {
{
handle<int> p(foo);
// We have exclusive access to the stored int.
*p = 42;
}
// other threads could access foo here.
{
handle<int> p(foo);
// We have exclusive access again.
*p *= 12;
// We can safely return with the mutex held,
// it will be unlocked for us in the handle destructor.
return ++*p;
}
}
You would code the program in the OP as:
struct shape_t
{
int numSides;
int x;
int y;
};
int main()
{
storage<int> intStore;
storage<shape_t> shapeStore;
// To set int value I can do:
*handle<int>(intStore) = 2;
{
// To set shape_t value I can do:
handle<shape_t> ptr(shapeStore);
ptr->numSides = 2;
ptr->x = 5;
ptr->y = 4;
}
// To modify 'x' (and keep y and numSides the same) I have to do:
handle<shape_t>(shapeStore)->x = 5;
}
I can propose an alternative solution.
When you need access, you can get a special accessor object from this template class that lets you manage the contained object.
template < typename T >
class SafeContainer
{
public:
// Variadic template for constructor
template<typename ... ARGS>
SafeContainer(ARGS ...arguments)
: m_data(arguments ...)
{};
// RAII mutex
class Accessor
{
public:
// lock when created
Accessor(SafeContainer<T>* container)
:m_container(container)
{
m_container->m_mutex.lock();
}
// Unlock when destroyed
~Accessor()
{
m_container->m_mutex.unlock();
}
// Access methods
T* operator -> ()
{
return &m_container->m_data;
}
T& operator * ()
{
return m_container->m_data;
}
private:
SafeContainer<T> *m_container;
};
friend Accessor;
Accessor get()
{
return Accessor(this);
}
private:
T m_data;
// Should be using recursive mutex to avoid deadlocks
std::mutex m_mutex;
};
Example:
struct shape_t
{
int numSides;
int x;
int y;
};
int main()
{
SafeContainer<shape_t> shape;
auto shapeSafe = shape.get();
shapeSafe->numSides = 2;
shapeSafe->x = 2;
}
Related
I have the following class:
class Document
{
public:
Document():
// default values for members,
// ...
m_dirty{false}{}
// Accessor functions
template<class OutputStream>
Document& save(OutputStream stream)
{
// Write stuff to `stream`
// ...
m_dirty = false;
return *this;
}
bool dirty() const { return m_dirty; }
private:
Size2d m_canvas_size;
LayerStack m_layers;
LayerIndex m_current_layer;
std::vector<Palette> m_palettes;
PaletteIndex m_current_palette;
ColorIndex m_current_color;
std::vector<std::string> m_palette_names;
std::vector<std::string> m_layer_names;
bool m_dirty;
};
Should the class have public member functions for modifying an element of, say, m_palettes directly, like
Document& color(PaletteIndex, ColorIndex, Color)
, or is it more "correct" to only allow access to the entire vector, through a pair of APIs:
std::vector<Palette> const& palettes();
Document& palettes(std::vector<Palette>&&);
The first option would be more efficient, since it would not require creating a temporary copy of the data member, but consistent use of this design would bloat the interface: it would require "deep" getters and setters for every container in the class.
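For illustration, the first option might look roughly like this (the value() accessors and Palette::set_color used here are hypothetical):
Document& Document::color(PaletteIndex pal, ColorIndex idx, Color c)
{
    m_palettes[pal.value()].set_color(idx, c); // hypothetical index/Palette accessors
    m_dirty = true;                            // keep the dirty flag in sync
    return *this;
}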
Notice the dirty flag. Thus, the following would break the abstraction:
std::vector<Palette>& palettes();
You might have a Proxy to "propagate" the dirty flag from Palette modifications, something like:
template <typename T>
class DirtyProxy
{
T& data;
bool& dirty;
public:
DirtyProxy(T& data, bool& dirty) : data(data), dirty(dirty) {}
~DirtyProxy() { dirty = true;}
DirtyProxy(const DirtyProxy&) = delete;
T* operator ->() { return &data; }
};
And then
DirtyProxy<Palette> palette(std::size_t i) { return {m_palettes.at(i), m_dirty}; }
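Usage would then look something like this (Palette::set_color is again a hypothetical member):
Document doc;
doc.palette(0)->set_color(ColorIndex{}, Color{}); // m_dirty is set when the proxy is destroyed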
I think the most robust way to solve it is to use a callback. An issue with the proxy is that it would not handle the case where the client code throws an exception (assuming you want the strong exception guarantee). Testcase:
try
{
auto property_proxy = obj.getProperty();
// an exception is thrown here...
property_proxy->val = x; // Never updated
}
catch(...)
{}
assert(!obj.dirty());
will fail, because the dtor always sets the dirty flag. However, with a callback
class Foo
{
public:
template<class F>
Foo& modifyInSitu(F&& f)
{
f(x);
m_dirty = true; // only reached if f(x) did not throw
return *this;
}
private:
int x = 0; // stand-in for the real data member
bool m_dirty = false;
};
will only update m_dirty when f(x) does not throw.
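A quick usage sketch (assuming x is an int member, as in the sketch above):
Foo foo;
foo.modifyInSitu([](int& v) { v = 42; }); // m_dirty is set only if the lambda returns normally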
Introduction
I have a data structure: a pool of values (not a pool of pointers).
When I call create(), it returns a Handle.
Everything is good so far.
template<class T> class Pool{
std::vector<T> v; //store by value
Handle<T> create(){ .... }
};
template<class T> class Handle{
Pool<T>* pool_; //pointer back to container
int pool_index_; //where I am in the container
T* operator->() {
return &pool_->v.at(pool_index_); //i.e. "&pool[index]"
}
void destroy(){
pool_-> ... destroy(this) .... mark "pool_index_" as unused, etc ....
}
};
Now I want Handle<> to support polymorphism.
Question
Many experts have kindly advised me to use weak_ptr, but I have been stuck for a week and don't know how to do it.
The major parts where I am stuck are:
Should create() return weak_ptr, not Handle?
.... or should Handle encapsulate weak_ptr?
If create() returns weak_ptr to the user's program, ...
how would weak_ptr know pool_index_? It doesn't have such a field.
If the user casts weak_ptr/Handle to a parent class pointer as follows, there are many issues:
e.g.
class B{};
class C : public B { ......
};
....
{
Pool<C> cs;
Handle<C> cPtr=cs.create();
Handle<B> bPtr=cPtr; // casting ;expected to be valid,
// ... but how? (weak_ptr may solve it)
bPtr->destroy() ; // bPtr will invoke Pool<B>::destroy which is wrong!
// Pool<C>::destroy is the correct one
bPtr.operator->() ; // faces the same problem as above
}
Assumption
Pool is always deleted after Handle (for simplicity).
no multi-threading
Here are similar questions, but none are close enough.
C++ object-pool that provides items as smart-pointers that are returned to pool upon deletion
C++11 memory pool design pattern?
Regarding weak_ptr
A std::weak_ptr is always associated with a std::shared_ptr. To use weak_ptr you would have to manage your objects with shared_ptr. This would mean ownership of your objects can be shared: Anybody can construct a shared_ptr from a weak_ptr and store it somewhere. The pointed-to object will only get deleted when all shared_ptr's are destroyed. The Pool will lose direct control over object deallocation and thus cannot support a public destroy() function.
With shared ownership things can get really messy.
This is one reason why std::unique_ptr often is a better alternative for object lifetime management (sadly it doesn't work with weak_ptr). Your Handle::destroy() function also implies that this is not what you want and that the Pool alone should handle the lifetime of its objects.
However, shared_ptr/weak_ptr are designed for multi-threaded applications. In a single-threaded environment you can get weak_ptr-like functionality (check for valid targets and avoid dangling pointers) without using weak_ptr at all:
template<class T> class Pool {
bool isAlive(int index) const { ... }
};
template<class T> class Handle {
explicit operator bool() const { return pool_->isAlive(pool_index_); }
};
Why does this only work in a single-threaded environment?
Consider this scenario in a multi-threaded program:
void doSomething(std::weak_ptr<Obj> weak) {
std::shared_ptr<Obj> shared = weak.lock();
if(shared) {
// Another thread might destroy the object right here
// if we didn't have our own shared_ptr<Obj>
shared->doIt(); // And this would crash
}
}
In the above case, we have to make sure that the pointed-to object is still accessible after the if(). We therefore construct a shared_ptr that will keep it alive - no matter what.
In a single-threaded program you don't have to worry about that:
void doSomething(Handle<Obj> handle) {
if(handle) {
// No other threads can interfere
handle->doIt();
}
}
You still have to be careful when dereferencing the handle multiple times. Example:
void doDamage(Handle<GameUnit> source, Handle<GameUnit> target) {
if(source && target) {
source->invokeAction(target);
// What if 'target' reflects some damage back and kills 'source'?
source->payMana(); // Segfault
}
}
But with another if(source) you can now easily check if the handle is still valid!
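For example, a small sketch of that re-check:
void doDamage(Handle<GameUnit> source, Handle<GameUnit> target) {
    if(source && target) {
        source->invokeAction(target);
        if(source)             // re-validate: 'source' may have died inside invokeAction
            source->payMana();
    }
}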
Casting Handles
So, the template argument T as in Handle<T> doesn't necessarily match the type of the pool. Maybe you could resolve this with template magic. I can only come up with a solution that uses dynamic dispatch (virtual method calls):
struct PoolBase {
virtual void destroy(int index) = 0;
virtual void* get(int index) = 0;
virtual bool isAlive(int index) const = 0;
};
template<class T> struct Pool : public PoolBase {
Handle<T> create() { return Handle<T>(this, nextIndex); }
void destroy(int index) override { ... }
void* get(int index) override { ... }
bool isAlive(int index) const override { ... }
};
template<class T> struct Handle {
PoolBase* pool_;
int pool_index_;
Handle(PoolBase* pool, int index) : pool_(pool), pool_index_(index) {}
// Conversion Constructor
template<class D> Handle(const Handle<D>& orig) {
T* Cannot_cast_Handle = (D*)nullptr;
(void)Cannot_cast_Handle;
pool_ = orig.pool_;
pool_index_ = orig.pool_index_;
}
explicit operator bool() const { return pool_->isAlive(pool_index_); }
T* operator->() { return static_cast<T*>( pool_->get(pool_index_) ); }
void destroy() { pool_->destroy(pool_index_); }
};
Usage:
Pool<Impl> pool;
Handle<Impl> impl = pool.create();
// Conversions
Handle<Base> base = impl; // Works
Handle<Impl> impl2 = base; // Compile error - which is expected
The lines that check for valid conversions are likely to be optimized out. The check will still happen at compile-time! Trying an invalid conversion will give you an error like this:
error: invalid conversion from 'Base*' to 'Impl*' [-fpermissive]
T* Cannot_cast_Handle = (D*)nullptr;
I uploaded a simple, compilable test case here: http://ideone.com/xeEdj5
I have a shared object that needs to be sent to a system API and extracted back later. The system API accepts a void * only. I cannot use shared_ptr::get() because it does not increase the reference count, and the object could be released by other threads before it is extracted back from the system API. Sending a newed shared_ptr * would work but involves an additional heap allocation.
One way to do it is to let the object derive from enable_shared_from_this. However, because this class template owns only a weak_ptr, it is not enough to keep the object from being released.
So my solution looks like the following:
class MyClass:public enable_shared_from_this<MyClass> {
private:
shared_ptr<MyClass> m_this;
public:
void *lock(){
m_this=shared_from_this();
return this;
}
static shared_ptr<MyClass> unlock(void *p){
auto pthis = static_cast<MyClass *>(p);
return move(pthis->m_this);
}
/* ... */
}
/* ... */
auto pobj = make_shared<MyClass>(...);
/* ... */
system_api_send_obj(pobj->lock());
/* ... */
auto punlocked = MyClass::unlock(system_api_receive_obj());
Is there any easier way to do this?
The downsides of this solution:
It requires an additional shared_ptr<MyClass> in the MyClass object layout, in addition to the weak_ptr in the base class enable_shared_from_this.
As I mentioned in the comments, concurrent access to lock() and unlock() is NOT safe.
The worst thing is that this solution only supports one lock() before a call to unlock(). If the same object is to be used for multiple system API calls, additional reference counting must be implemented.
If we had another enable_lockable_shared_from_this class, it would be great:
class MyClass:public enable_lockable_shared_from_this<MyClass> {
/* ... */
}
/* ... */
auto pobj = make_shared<MyClass>(...);
/* ... */
system_api_send_obj(pobj.lock());
/* ... */
auto punlocked = unlock_shared<MyClass>(system_api_receive_obj());
The implementation of enable_lockable_shared_from_this would be similar to enable_shared_from_this; the only difference is that it implements lock() and a helper function unlock_shared. Calling those functions just explicitly increases and decreases use_count(). This would be the ideal solution because:
It eliminates the additional space cost.
It reuses the existing shared_ptr facilities to guarantee concurrency safety.
The best thing about this solution is that it supports multiple lock() calls seamlessly.
However, the biggest downside is that it is not available at the moment!
UPDATE:
At least two answers to this question involve a container of pointers. Please compare those solutions with the following:
class MyClass:public enable_shared_from_this<MyClass> {
private:
shared_ptr<MyClass> m_this;
mutex this_lock; //not necessary in a single-threaded environment
int lock_count = 0;
public:
void *lock(){
lock_guard<mutex> lck(this_lock); //not necessary in a single-threaded environment
if(lock_count++ == 0)
m_this=shared_from_this();
return this;
}
static shared_ptr<MyClass> unlock(void *p){
auto pthis = static_cast<MyClass *>(p);
lock_guard<mutex> lck(pthis->this_lock); //not necessary in a single-threaded environment
if(--pthis->lock_count > 0)
return pthis->m_this;
else {
pthis->lock_count = 0;
return move(pthis->m_this); //returns nullptr if not previously locked
}
}
/* ... */
}
/* ... */
auto pobj = make_shared<MyClass>(...);
/* ... */
system_api_send_obj(pobj->lock());
/* ... */
auto punlocked = MyClass::unlock(system_api_receive_obj());
This is essentially an O(1) vs. O(n) game in space (and in time it is O(log n) or similar, but certainly more than O(1)).
I now have the following idea:
template<typename T>
struct locker_helper{
typedef shared_ptr<T> p_t;
typedef typename aligned_storage<sizeof(p_t), alignment_of<p_t>::value>::type s_t;
};
template<typename T> void lock_shared(const shared_ptr<T> &p){
typename locker_helper<T>::s_t value;
new (&value)shared_ptr<T>(p);
}
template<typename T> void unlock_shared(const shared_ptr<T> &p){
typename locker_helper<T>::s_t value
= *reinterpret_cast<const typename locker_helper<T>::s_t *const>(&p);
reinterpret_cast<shared_ptr<T> *>(&value)->~shared_ptr<T>();
}
template<typename T>
void print_use_count(string s, const shared_ptr<T> &p){
cout<<s<<p.use_count()<<endl;
}
int main(int argc, char **argv){
auto pi = make_shared<int>(10);
auto s = "pi use_count()=";
print_use_count(s, pi); //pi use_count()=1
lock_shared(pi);
print_use_count(s, pi);//pi use_count()=2
unlock_shared(pi);
print_use_count(s, pi);//pi use_count()=1
}
and then we can implement the original example as follows:
class MyClass:public enable_shared_from_this<MyClass> { /*...*/ };
/* ... */
auto pobj = make_shared<MyClass>(...);
/* ... */
lock_shared(pobj);
system_api_send_obj(pobj.get());
/* ... */
auto preceived =
static_cast<MyClass *>(system_api_receive_obj())->shared_from_this();
unlock_shared(preceived);
It is easy to implement an enable_lockable_shared_from_this with this idea. However, the above is more generic and allows controlling objects that are not derived from the enable_lockable_shared_from_this template class.
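For example, here is a rough, untested sketch of such a class built on top of the lock_shared/unlock_shared helpers above (not part of the original answer):
template<typename T>
class enable_lockable_shared_from_this : public std::enable_shared_from_this<T> {
public:
    void *lock() {
        lock_shared(this->shared_from_this()); // leave use_count() increased by one
        return static_cast<T*>(this);
    }
    static std::shared_ptr<T> unlock(void *p) {
        auto sp = static_cast<T*>(p)->shared_from_this();
        unlock_shared(sp);                     // drop the extra count taken in lock()
        return sp;
    }
};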
By adjusting the previous answer, I finally get the following solution:
//A wrapper class allowing you to control the object lifetime
//explicitly.
//
template<typename T> class life_manager{
public:
//Prevent polymorphic types because of the object slicing issue.
//To use with a polymorphic class, you need to create
//a container type for storage, and then use that type.
static_assert(!std::is_polymorphic<T>::value,
"Use on polymorphic types is prohibited.");
//Example of support for a single-argument constructor.
//Can be extended to support any number of parameters
//by using a variadic template.
template<typename V> static void ReConstruct(const T &p, V &&v){
new (const_cast<T *>(&p))T(std::forward<V>(v));
}
static void RawCopy(T &target, const T &source){
*internal_cast(&target) = *internal_cast(&source);
}
private:
//The standard says that reinterpret_cast<T *>(p) is the same as
//static_cast<T *>(static_cast<void *>(p)) only when T has standard layout.
//
//Unfortunately, shared_ptr does not.
static struct { char _unnamed[sizeof(T)]; } *internal_cast(const T *p){
typedef decltype(internal_cast(nullptr)) raw;
return static_cast<raw>(static_cast<void *>(const_cast<T *>(p)));
}
};
//Constructing a new instance of shared_ptr increases the reference
//count. The original object is overwritten in place, so its destructor will
//not run, which keeps the increased reference count after the call.
template<typename T> void lock_shared(const std::shared_ptr<T> &p){
life_manager<shared_ptr<T> >::ReConstruct(p, std::shared_ptr<T>(p));
}
//RawCopy does a bit-by-bit copy, bypassing the copy constructor,
//so the reference count does not increase. This function copies
//the shared_ptr into a local object, whose destructor then runs and
//decreases the reference count at the end of the call.
template<typename T> void unlock_shared(const std::shared_ptr<T> &p){
std::shared_ptr<T> tmp; //receives a raw copy of p; its destructor drops the count
life_manager<shared_ptr<T> >::RawCopy(tmp, p);
}
This is essentially the same as my previous version; the only change is a more generic facility for controlling object lifetime explicitly.
According to the standard (5.2.9/13), the static_cast sequence is well defined. The "raw" copy behaviour may be undefined, but we are explicitly requesting it, so the user MUST check compatibility with their implementation before using this facility.
In fact, only RawCopy() is required in this example; ReConstruct is included just for general use.
Why not just wrap the void * API in something that tracks the lifetime of that API's references to your object?
eg.
typedef std::shared_ptr<MyClass> MyPtr;
class APIWrapper
{
// could have multiple references to the same thing?
// use multiset instead!
// are references local to transient messages processed in FIFO order?
// use a queue! ... etc. etc.
std::set<MyPtr, SuitableComparator> references_;
public:
void send(MyPtr const &ref)
{
references_.insert(ref);
system_api_send_obj(ref.get());
}
MyPtr receive(void *api)
{
MyPtr ref( static_cast<MyClass *>(api)->shared_from_this() );
references_.erase(ref);
return ref;
}
};
Obviously (hopefully) you know the actual ownership semantics of your API, so the above is just a broad guess.
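For example (a sketch, reusing the placeholder API names from the question):
APIWrapper wrapper;
auto obj = std::make_shared<MyClass>();
wrapper.send(obj);                                       // wrapper keeps a reference while the API holds the raw pointer
// ... later, when the API hands the pointer back:
MyPtr back = wrapper.receive(system_api_receive_obj());  // wrapper drops its reference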
How about using the following global functions that use pointers-to-smart-pointers?
template<typename T> void *shared_lock(std::shared_ptr<T> &sp)
{
return new std::shared_ptr<T>(sp);
}
template<typename T> std::shared_ptr<T> shared_unlock(void *vp)
{
std::shared_ptr<T> *psp = static_cast<std::shared_ptr<T>*>(vp);
std::shared_ptr<T> res(*psp);
delete psp;
return res;
}
You have an extra new/delete but the original type is unmodified. And additionally, concurrency is not an issue, because each call to shared_lock will return a different shared_ptr.
To avoid the new/delete calls you could use an object pool, but I think that it is not worth the effort.
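Usage would look something like this (sketch, again with the question's placeholder API):
auto obj = std::make_shared<MyClass>();
system_api_send_obj(shared_lock(obj));                        // the new'd shared_ptr keeps the object alive
auto back = shared_unlock<MyClass>(system_api_receive_obj()); // deletes it and hands back a shared_ptr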
UPDATE:
If you are not going to use multiple threads in this matter, what about the following?
template<typename T> struct SharedLocker
{
std::vector< std::shared_ptr<T> > m_ptrs;
unsigned lock(std::shared_ptr<T> &sp)
{
for (unsigned i = 0; i < m_ptrs.size(); ++i)
{
if (!m_ptrs[i])
{
m_ptrs[i] = sp;
return i;
}
}
m_ptrs.push_back(sp);
return m_ptrs.size() - 1;
}
std::shared_ptr<T> unlock(unsigned i)
{
return std::move(m_ptrs[i]);
}
};
I've changed the void* into an unsigned, but that shouldn't be a problem. You could also use intptr_t, if needed.
Assuming that most of the time there will be only a handful of objects in the vector, maybe even no more than 1, the memory allocations will only happen once, on first insertion. And the rest of the time it will be at zero cost.
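For example (sketch):
SharedLocker<MyClass> locker;
auto obj = std::make_shared<MyClass>();
unsigned ticket = locker.lock(obj);  // the slot index plays the role of the void* cookie
// ... pass 'ticket' through the API (e.g. cast to an integer the API can carry) ...
auto back = locker.unlock(ticket);   // frees the slot and hands ownership back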
I have a container (C++) on which I need to operate in two ways, from different threads: 1) add and remove elements, and 2) iterate through its members. Clearly, removing an element while iteration is happening = disaster. The code looks something like this:
class A
{
public:
...
void AddItem(const T& item, int index) { /*Put item into my_stuff at index*/ }
void RemoveItem(const T& item) { /*Take item out of m_stuff*/ }
const list<T>& MyStuff() { return my_stuff; } //*Hate* this, but see class C
private:
Mutex mutex; //Goes in the *Item methods, but is largely worthless in MyStuff()
list<T> my_stuff; //Just as well a vector or deque
};
extern A a; //defined in the .cpp file
class B
{
...
void SomeFunction() { ... a.RemoveItem(item); }
};
class C
{
...
void IterateOverStuff()
{
const list<T>& my_stuff(a.MyStuff());
for (list<T>::const_iterator it=my_stuff.begin(); it!=my_stuff.end(); ++it)
{
...
}
}
};
Again, B::SomeFunction() and C::IterateOverStuff() are getting called asynchronously. What's a data structure I can use to ensure that during the iteration, my_stuff is 'protected' from add or remove operations?
Sounds like a reader/writer lock is needed. Basically, the idea is that you may have one or more readers OR a single writer. You can never hold a read lock and a write lock at the same time.
EDIT: An example of usage which I think fits your design involves making a small change. Add an "iterate" function to the class which owns the list and make it templated so you can pass a function/functor to define what to do for each node. Something like this (quick and dirty pseudo code, but you get the point...):
class A {
public:
...
void AddItem(const T& item, int index) {
rwlock.lock_write();
// add the item
rwlock.unlock_write();
}
void RemoveItem(const T& item) {
rwlock.lock_write();
// remove the item
rwlock.unlock_write();
}
template <class P>
void iterate_list(P pred) {
rwlock.lock_read();
std::for_each(my_stuff.begin(), my_stuff.end(), pred);
rwlock.unlock_read();
}
private:
rwlock_t rwlock;
list<T> my_stuff; //Just as well a vector or deque
};
extern A a; //defined in the .cpp file
class B {
...
void SomeFunction() { ... a.RemoveItem(item); }
};
class C {
...
void read_node(const T &element) { ... }
void IterateOverStuff() {
a.iterate_list(boost::bind(&C::read_node, this, _1));
}
};
Another Option would be to make the reader/writer lock publicly accessible and have the caller responsible for correctly using the lock. But that's more error prone.
IMHO it is a mistake to have a private mutex in a data structure class and then write the class methods so that the whole thing is thread safe no matter what the code that calls the methods does. The complexity that is required to do this completely and perfectly is way over the top.
The simpler way is to have a public ( or global ) mutex which the calling code is responsible for locking when it needs to access the data.
Here is my blog article on this subject.
When you return the list, return it enclosed in a class that locks/unlocks the mutex in its constructor/destructor. Something along the lines of
class LockedIterable {
public:
LockedIterable(const list<T> &l, Mutex &mutex) : list_(l), mutex_(mutex)
{lock(mutex_);}
LockedIterable(const LockedIterable &other) : list_(other.list_), mutex_(other.mutex_) {
// may be tricky - maybe wrap mutex_/list_ in a separate structure and manage it via shared_ptr?
}
~LockedIterable(){unlock(mutex_);}
list<T>::const_iterator begin(){return list_.begin();}
list<T>::const_iterator end(){return list_.end();}
private:
const list<T> &list_;
Mutex &mutex_;
};
class A {
...
LockedIterable MyStuff() { return LockedIterable(my_stuff, mutex); }
};
The tricky part is writing the copy constructor so that your mutex does not have to be recursive. Or just use auto_ptr.
Oh, and reader/writer lock is indeed a better solution than mutex here.
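Usage in C::IterateOverStuff would then look something like this (sketch):
void C::IterateOverStuff()
{
    LockedIterable stuff = a.MyStuff();   // mutex locked for the lifetime of 'stuff'
    for (list<T>::const_iterator it = stuff.begin(); it != stuff.end(); ++it)
    {
        // safe: AddItem/RemoveItem block until 'stuff' goes out of scope
    }
}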
Suppose I have an autolocker class which looks something like this:
template <typename T>
class autolocker {
public:
autolocker(T *l) : lock(l) {
lock->lock();
}
~autolocker() {
lock->unlock();
}
private:
autolocker(const autolocker&);
autolocker& operator=(const autolocker&);
private:
T *lock;
};
Obviously the goal is to be able to use this autolocker with anything that has a lock/unlock method without resorting to virtual functions.
Currently, it's simple enough to use like this:
autolocker<some_lock_t> lock(&my_lock); // my_lock is of type "some_lock_t"
but it is illegal to do:
autolocker lock(&my_lock); // this would be ideal
Is there any way to get template type deduction to play nice with this (keep in mind autolocker is non-copyable)? Or is it easiest to just specify the type?
Yes, you can use the scope-guard technique:
struct autolocker_base {
autolocker_base() { }
protected:
// ensure outside users can't copy or assign it
autolocker_base(autolocker_base const&)
{ }
autolocker_base &operator=(autolocker_base const&)
{ return *this; }
};
template <typename T>
class autolocker : public autolocker_base {
public:
autolocker(T *l) : lock(l) {
lock->lock();
}
autolocker(const autolocker& o)
:autolocker_base(o), lock(o.lock)
{ o.lock = 0; }
~autolocker() {
if(lock)
lock->unlock();
}
private:
autolocker& operator=(const autolocker&);
private:
mutable T *lock;
};
Then write a function creating the autolocker
template<typename T>
autolocker<T> makelocker(T *l) {
return autolocker<T>(l);
}
typedef autolocker_base const& autolocker_t;
You can then write it like this:
autolocker_t lock = makelocker(&my_lock);
Once the const reference goes out of scope, the destructor is called. It doesn't need to be virtual. At least GCC optimizes this quite well.
Sadly, this means you have to make your locker-object copyable since you need to return it from the maker function. But the old object won't try to unlock twice, because its pointer is set to 0 when it's copied, so it's safe.
Obviously you can't get away with autolocker being a template, because you want to use it as a type, and templates must be instantiated in order to obtain types.
But type-erasure might be used to do what you want. You turn the class template into a class and its constructor into a member template. But then you'd have to dynamically allocate an inner implementation object.
Better, store a pointer to a function that performs the unlock and let that function be an instance of a template chosen by the templatized constructor. Something along these lines:
// Comeau compiles this, but I haven't tested it.
class autolocker {
public:
template< typename T >
autolocker(T *l) : lock_(l), unlock_(&unlock<T>) { l->lock(); }
~autolocker() { unlock_(lock_); }
private:
autolocker(const autolocker&);
autolocker& operator=(const autolocker&);
private:
typedef void (*unlocker_func_)(void*);
void *lock_;
unlocker_func_ unlock_;
template <typename T>
static void unlock(void* lock) { ((T*)lock)->unlock(); }
};
I haven't actually tried this and the syntax might be wrong (I'm not sure how to take the address of a specific function template instance), but I think this should be doable in principle. Maybe someone comes along and fixes what I got wrong.
I like this a lot more than the scope guard, which, for some reason, I never really liked at all.
I think jwismar is correct and what you want is not possible with C++. However, a similar (not direct analogue) construct is possible with C++0x, using several new features (rvalues/moving and auto variable type):
#include <iostream>
template <typename T>
class autolocker_impl
{
public:
autolocker_impl(T *l) : lock(l) {
lock->lock();
}
autolocker_impl (autolocker_impl&& that)
: lock (that.lock)
{
that.lock = 0;
}
~autolocker_impl() {
if (lock)
lock->unlock();
}
private:
autolocker_impl(const autolocker_impl&);
autolocker_impl& operator=(const autolocker_impl&);
private:
T *lock;
};
template <typename T>
autolocker_impl <T>
autolocker (T* lock)
{
return autolocker_impl <T> (lock);
}
struct lock_type
{
void lock ()
{ std::cout << "locked\n"; }
void unlock ()
{ std::cout << "unlocked\n"; }
};
int
main ()
{
lock_type l;
auto x = autolocker (&l);
}
autolocker is a class template, not a class. Your "this would be ideal" is showing something that doesn't make sense in C++.