I have a shared object that needs to be sent to a system API and extracted back later. The system API accepts only a void *. I cannot use shared_ptr::get() because it does not increase the reference count, so the object could be released by other threads before I extract it from the system API. Sending a newed shared_ptr * would work but involves an additional heap allocation.
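For reference, the "newed shared_ptr *" variant I want to avoid looks roughly like this (just a sketch, reusing the system_api_* names below):

auto pobj = make_shared<MyClass>(/*...*/);
system_api_send_obj(new shared_ptr<MyClass>(pobj));   // extra heap allocation here
/* ... */
auto praw = static_cast<shared_ptr<MyClass> *>(system_api_receive_obj());
auto pback = *praw;   // take back shared ownership
delete praw;          // release the extra allocation and its reference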
One way to do it is to let the object derive from enable_shared_from_this. However, because this class template owns only a weak_ptr, that is not enough to keep the object from being released.
So my solution looks like the following:
class MyClass : public enable_shared_from_this<MyClass> {
private:
    shared_ptr<MyClass> m_this;
public:
    void *lock(){
        m_this = shared_from_this();
        return this;
    }
    static shared_ptr<MyClass> unlock(void *p){
        auto pthis = static_cast<MyClass *>(p);
        return move(pthis->m_this);
    }
    /* ... */
};
/* ... */
auto pobj = make_shared<MyClass>(...);
/* ... */
system_api_send_obj(pobj->lock());
/* ... */
auto punlocked = MyClass::unlock(system_api_receive_obj());
Is there an easier way to do this?
The downsides of this solution:
it requires an additional shared_ptr<MyClass> in the MyClass object layout, on top of the weak_ptr in the base class enable_shared_from_this.
As I mentioned in the comments, calling lock() and unlock() concurrently is NOT safe.
The worst part is that this solution supports only one lock() before a call to unlock(). If the same object is to be used for multiple system API calls, additional reference counting must be implemented.
It would be great if we had another class template, enable_lockable_shared_from_this:
class MyClass : public enable_lockable_shared_from_this<MyClass> {
    /* ... */
};
/* ... */
auto pobj = make_shared<MyClass>(...);
/* ... */
system_api_send_obj(pobj->lock());
/* ... */
auto punlocked = unlock_shared<MyClass>(system_api_receive_obj());
The implementation of enable_lockable_shared_from_this would be similar to enable_shared_from_this; the only difference is that it implements lock() and a helper function unlock_shared. Calling those functions would simply increase and decrease use_count() explicitly. This would be the ideal solution because:
It eliminates the additional space cost.
It reuses the facilities existing for shared_ptr to guarantee concurrency safety.
The best thing about this solution is that it supports multiple lock() calls seamlessly.
However, the biggest downside is: it is not available at the moment!
UPDATE:
At least two answers to this question involve a container of pointers. Please compare those solutions with the following:
class MyClass : public enable_shared_from_this<MyClass> {
private:
    shared_ptr<MyClass> m_this;
    mutex this_lock;     // not necessary in a single-threaded environment
    int lock_count = 0;
public:
    void *lock(){
        lock_guard<mutex> lck(this_lock);   // not necessary in a single-threaded environment
        if(lock_count++ == 0)
            m_this = shared_from_this();
        return this;
    }
    static shared_ptr<MyClass> unlock(void *p){
        auto pthis = static_cast<MyClass *>(p);
        lock_guard<mutex> lck(pthis->this_lock);   // not necessary in a single-threaded environment
        if(--pthis->lock_count > 0)
            return pthis->m_this;
        else {
            pthis->lock_count = 0;
            return move(pthis->m_this); //returns nullptr if not previously locked
        }
    }
    /* ... */
};
/* ... */
auto pobj = make_shared<MyClass>(...);
/* ... */
system_api_send_obj(pobj->lock());
/* ... */
auto punlocked = MyClass::unlock(system_api_receive_obj());
This is an O(1) vs. O(n) game in space (in time, the container solutions are O(log n) or similar, but certainly more than O(1)).
I now have the following idea:
#include <iostream>
#include <memory>
#include <string>
#include <type_traits>
using namespace std;

template<typename T>
struct locker_helper{
    typedef shared_ptr<T> p_t;
    typedef typename aligned_storage<sizeof(p_t), alignment_of<p_t>::value>::type s_t;
};
template<typename T> void lock_shared(const shared_ptr<T> &p){
    typename locker_helper<T>::s_t value;
    new (&value) shared_ptr<T>(p);   // the copy is never destroyed, so use_count() stays raised
}
template<typename T> void unlock_shared(const shared_ptr<T> &p){
    typename locker_helper<T>::s_t value
        = *reinterpret_cast<const typename locker_helper<T>::s_t *>(&p);
    reinterpret_cast<shared_ptr<T> *>(&value)->~shared_ptr<T>();   // destroy the bit-copy, dropping use_count()
}
template<typename T>
void print_use_count(string s, const shared_ptr<T> &p){
    cout<<s<<p.use_count()<<endl;
}
int main(int argc, char **argv){
    auto pi = make_shared<int>(10);
    auto s = "pi use_count()=";
    print_use_count(s, pi); //pi use_count()=1
    lock_shared(pi);
    print_use_count(s, pi); //pi use_count()=2
    unlock_shared(pi);
    print_use_count(s, pi); //pi use_count()=1
}
and then we can implement the original example as follows:
class MyClass : public enable_shared_from_this<MyClass> { /*...*/ };
/* ... */
auto pobj = make_shared<MyClass>(...);
/* ... */
lock_shared(pobj);
system_api_send_obj(pobj.get());
/* ... */
auto preceived =
static_cast<MyClass *>(system_api_receive_obj())->shared_from_this();
unlock_shared(preceived);
It is easy to implement an enable_lockable_shared_from_this with this idea. However, the above is more generic: it allows controlling the lifetime of objects that do not derive from the enable_shared_from_this class template.
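A rough sketch of what such a base could look like on top of the helpers above (the name unlock_locked is my own; none of this is in the original post):

template<typename T>
class enable_lockable_shared_from_this
    : public std::enable_shared_from_this<T> {
public:
    void *lock() {
        // keep one extra reference alive for the duration of the API call
        lock_shared(this->shared_from_this());
        return static_cast<T *>(this);
    }
};

template<typename T>
std::shared_ptr<T> unlock_locked(void *p) {
    auto sp = static_cast<T *>(p)->shared_from_this();
    unlock_shared(sp);   // drop the extra reference taken in lock()
    return sp;
}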
By adjusting the previous answer, I finally arrived at the following solution:
//A wrapper class allowing you to control the object lifetime
//explicitly.
//
template<typename T> class life_manager{
public:
    //Prevent polymorphic types because of the object slicing issue.
    //To use this with a polymorphic class, you need to create
    //a container type for storage, and then use that type.
    static_assert(!std::is_polymorphic<T>::value,
        "Use on polymorphic types is prohibited.");
    //Example of support for a single-parameter constructor.
    //Can be extended to support any number of parameters
    //by using variadic templates.
    template<typename V> static void ReConstruct(const T &p, V &&v){
        new (const_cast<T *>(&p)) T(std::forward<V>(v));
    }
    static void RawCopy(T &target, const T &source){
        *internal_cast(&target) = *internal_cast(&source);
    }
private:
    //The standard says that reinterpret_cast<T *>(p) is the same as
    //static_cast<T *>(static_cast<void *>(p)) only when T has standard layout.
    //
    //Unfortunately shared_ptr does not.
    static struct { char _unnamed[sizeof(T)]; } *internal_cast(const T *p){
        typedef decltype(internal_cast(nullptr)) raw;
        return static_cast<raw>(static_cast<void *>(const_cast<T *>(p)));
    }
};
//Constructing a new instance of shared_ptr increases the reference
//count. The original pointer is overwritten without being destroyed,
//which keeps the increased reference count after the call.
template<typename T> void lock_shared(const std::shared_ptr<T> &p){
    life_manager<std::shared_ptr<T> >::ReConstruct(p, std::shared_ptr<T>(p));
}
//RawCopy does a bit-by-bit copy, bypassing the copy constructor,
//so the reference count does not increase. This function copies
//the shared_ptr into a temporary, which is then destructed and
//decreases the reference count after the call.
template<typename T> void unlock_shared(const std::shared_ptr<T> &p){
    std::shared_ptr<T> tmp;
    life_manager<std::shared_ptr<T> >::RawCopy(tmp, p);
}
This is essentially the same as my previous version; the only thing I did was create a more generic facility to control object lifetime explicitly.
According to the standard (5.2.9/13), the static_cast sequence is definitely well defined. Furthermore, the "raw" copy behavior could be undefined, but we are explicitly requesting it, so the user MUST check the platform's compatibility before using this facility.
Furthermore, only RawCopy() is actually required in this example. ReConstruct was introduced just for general-purpose use.
Why not just wrap the void * API in something that tracks the lifetime of that API's references to your object?
e.g.
typedef std::shared_ptr<MyClass> MyPtr;
class APIWrapper
{
// could have multiple references to the same thing?
// use multiset instead!
// are references local to transient messages processed in FIFO order?
// use a queue! ... etc. etc.
std::set<MyPtr, SuitableComparator> references_;
public:
void send(MyPtr const &ref)
{
references_.insert(ref);
system_api_send_obj(ref.get());
}
MyPtr receive(void *api)
{
MyPtr ref( static_cast<MyClass *>(api)->shared_from_this() );
references_.erase(ref);
return ref;
}
};
Obviously (hopefully) you know the actual ownership semantics of your API, so the above is just a broad guess.
How about using the following global functions, which use pointers-to-smart-pointers?
template<typename T> void *shared_lock(std::shared_ptr<T> &sp)
{
return new std::shared_ptr<T>(sp);
}
template<typename T> std::shared_ptr<T> shared_unlock(void *vp)
{
std::shared_ptr<T> *psp = static_cast<std::shared_ptr<T>*>(vp);
std::shared_ptr<T> res(*psp);
delete psp;
return res;
}
You have an extra new/delete, but the original type is unmodified. Additionally, concurrency is not an issue, because each call to shared_lock returns a different shared_ptr.
To avoid the new/delete calls you could use an object pool, but I think that it is not worth the effort.
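Usage would look roughly like this (a sketch reusing the MyClass and system_api_* names from the question):

auto pobj = std::make_shared<MyClass>(/*...*/);
system_api_send_obj(shared_lock(pobj));                              // use_count() +1, void* handed out
/* ... */
auto punlocked = shared_unlock<MyClass>(system_api_receive_obj());   // extra count released, ownership handed back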
UPDATE:
If you are not going to use multiple threads in this matter, what about the following?
template<typename T> struct SharedLocker
{
std::vector< std::shared_ptr<T> > m_ptrs;
unsigned lock(std::shared_ptr<T> &sp)
{
for (unsigned i = 0; i < m_ptrs.size(); ++i)
{
if (!m_ptrs[i])
{
m_ptrs[i] = sp;
return i;
}
}
m_ptrs.push_back(sp);
return m_ptrs.size() - 1;
}
std::shared_ptr<T> unlock(unsigned i)
{
return std::move(m_ptrs[i]);
}
};
I've changed the void* into an unsigned, but that shouldn't be a problem. You could also use intptr_t, if needed.
Assuming that most of the time there will be only a handful of objects in the vector, maybe even no more than 1, the memory allocations will only happen once, on first insertion. And the rest of the time it will be at zero cost.
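A usage sketch (the locker instance is my addition; as noted above, the "token" passed around is now an unsigned index rather than a void*):

SharedLocker<MyClass> locker;
auto pobj = std::make_shared<MyClass>();

unsigned token = locker.lock(pobj);   // keeps a shared_ptr alive in the vector
/* ... hand 'token' to the system API instead of a pointer ... */
std::shared_ptr<MyClass> punlocked = locker.unlock(token);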
I am writing a C++ wrapper around a C library. Here is an example of my strategy.
// header file
class LibrdfUri { // wrapper around librdf.h librdf_uri*
/*
* If the deleter of std::unique_ptr is an empty
* class then it can do some optimizations and
* not actually store the deleter object.
* Otherwise it has to accommodate extra space for
* the deleter, which is unnecessary
* https://stackoverflow.com/questions/61969200/what-is-the-purpose-of-wrapping-this-private-deleter-function-in-a-struct/61969274#61969274
*/
struct deleter {
// turns deleter into a functor. For passing on to unique_ptr
void operator()(librdf_uri *ptr);
};
// automate management of librdf_uri* lifetime
std::unique_ptr<librdf_uri, deleter> librdf_uri_;
public:
LibrdfUri() = default;
explicit LibrdfUri(const std::string& uri); // construct from string
librdf_uri *get(); // returns the underlying raw pointer
};
// implementation
void LibrdfUri::deleter::operator()(librdf_uri *ptr) {
librdf_free_uri(ptr); // this is the C library function for destruction of librdf_uri
}
LibrdfUri::LibrdfUri(const std::string &uri) {
// create pointer to underlying C library 'object'
librdf_uri_ = std::unique_ptr<librdf_uri, deleter>(
librdf_new_uri(World::getWorld(), (const unsigned char *) uri.c_str()) // World::getWorld is static. Returns a pointer required by librdf_new_uri
);
}
librdf_uri *LibrdfUri::get() {
return librdf_uri_.get();
}
// and is used like so:
LibrdfUri uri("http://uri.com");
librdf_uri* curi = uri.get(); // when needed
This works for the single type librdf_uri*, which is a part of the underlying library; however, I have lots of these. My question is double-barrelled. The first part concerns the best general strategy for generalizing this wrapper to other classes, while the second concerns the implementation of that strategy.
Regarding the first part, here are my thoughts:
1. I could implement each class manually like I've done here. This is probably the simplest and least elegant option, yet it still might be my best one. However, there is some code duplication involved, since each CWrapper I write will have essentially the same structure. Not to mention that if I need to change something, I'll have to do each class individually.
2. Use a base class (abstract?)
3. Use a template
The second part of my question is basically: if I implement either option 2 or 3 (which I think might even be just a single option) how would I do it?
Here is a (vastly broken) version of what I'm thinking:
template<class LibrdfType>
class CWrapper {
    struct deleter {
        void operator()(LibrdfType *ptr) {
            // ?? what goes here for each LibrdfType?
        }
    };
    std::unique_ptr<LibrdfType, deleter> ptr;
public:
    CWrapper() = default;
    LibrdfType *get() {
        return ptr.get();
    }
};
Then LibrdfUri, and any other C class I need to wrap, would just subclass CWrapper.
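A sketch of how I imagine the subclass would look, assuming CWrapper gained a protected constructor that takes ownership of the raw pointer (that constructor is not in the draft above):

class LibrdfUri : public CWrapper<librdf_uri> {
public:
    explicit LibrdfUri(const std::string &uri)
        : CWrapper<librdf_uri>(
              librdf_new_uri(World::getWorld(),
                             (const unsigned char *) uri.c_str())) {}
};

// usage stays the same as before:
// LibrdfUri uri("http://uri.com");
// librdf_uri *curi = uri.get();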
This is a better deleter:
template<auto f>
using deleter=std::integral_constant< std::decay_t<decltype(f)>, f >;
use:
deleter<librdf_free_uri>
is a stateless deleter that calls librdf_free_uri.
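For example (a sketch; the template<auto> form requires C++17):

using uri_ptr = std::unique_ptr<librdf_uri, deleter<librdf_free_uri>>;

uri_ptr make_uri(const std::string &uri) {
    return uri_ptr(librdf_new_uri(World::getWorld(),
                                  (const unsigned char *) uri.c_str()));
}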
But I don't think we need that. Here is what I might do:
There are 3 pieces of information you need.
How to construct
How to destroy
What type to store
One way is to define ADL-based helpers with well-known names that you overload to delete/construct.
template<class T>struct tag_t{};
template<class T>constexpr tag_t<T> tag{};
template<class T>
void delete_wrapptr(T*)=delete;
struct cleanup_wrapptr{
template<class T>
void operator()(T* t)const{ delete_wrapptr(t); }
};
template<class T>
using wrapptr=std::unique_ptr<T, cleanup_wrapptr>;
template<class T>
wrapptr<T> make_wrapptr( tag_t<T>, ... )=delete;
Now you just have to write overloads for make and delete:
void delete_wrapptr(librdf_uri* ptr){
    librdf_free_uri(ptr); // the C library function that destroys a librdf_uri
}
wrapptr<librdf_uri> make_wrapptr(tag_t<librdf_uri>, const std::string &uri) {
    return wrapptr<librdf_uri>(
        librdf_new_uri(World::getWorld(), (const unsigned char *) uri.c_str())); // World::getWorld() is static and returns the pointer required by librdf_new_uri
}
and you can:
wrapptr<librdf_uri> ptr = make_wrapptr(tag<librdf_uri>, uri);
The implementation then comes down to overloading those two functions.
The make_wrapptr and delete_wrapptr overloads you write need to be visible at the point of creation, or live in the namespace of T, tag_t, or cleanup_wrapptr. The implementations can be hidden in a cpp file, but the declarations of the overloads cannot.
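A sketch of that split (file names are illustrative):

// librdf_wrappers.h -- declarations, visible wherever make_wrapptr is called
void delete_wrapptr(librdf_uri *ptr);
wrapptr<librdf_uri> make_wrapptr(tag_t<librdf_uri>, const std::string &uri);

// librdf_wrappers.cpp -- definitions can hide the C library details
void delete_wrapptr(librdf_uri *ptr) { librdf_free_uri(ptr); }
wrapptr<librdf_uri> make_wrapptr(tag_t<librdf_uri>, const std::string &uri) {
    return wrapptr<librdf_uri>(
        librdf_new_uri(World::getWorld(),
                       (const unsigned char *) uri.c_str()));
}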
Introduction
I have a data structure: a pool of values (not a pool of pointers).
When I call create(), it returns a Handle.
Everything is good so far.
template<class T> class Pool{
    std::vector<T> v; //store by value
    Handle<T> create(){ .... }
};
template<class T> class Handle{
    Pool<T>* pool_;     //pointer back to the container
    int pool_index_;    //where I am in the container
    T* operator->() {
        return &pool_->v.at(pool_index_); //i.e. "&pool[index]"
    }
    void destroy(){
        pool_-> ... destroy(this) .... mark "pool_index_" as unused, etc ....
    }
};
Now I want Handle<> to support polymorphism.
Question
Many experts have kindly advised me to use weak_ptr, but I have been drawing a blank for a week and don't know how to do it.
The major parts where I am stuck are:
Should create() return weak_ptr, not Handle?
.... or should Handle encapsulate weak_ptr?
If create() returns a weak_ptr to the user's program, ...
how would the weak_ptr know pool_index_? It doesn't have such a field.
If the user casts the weak_ptr/Handle to a parent class pointer as follows, there are many issues:
e.g.
class B{};
class C : public B { ......
};
....
{
    Pool<C> cs;
    Handle<C> cPtr=cs.create();
    Handle<B> bPtr=cPtr; // casting; expected to be valid,
                         // ... but how? (weak_ptr may solve it)
    bPtr->destroy() ;    // bPtr will invoke Pool<B>::destroy, which is wrong!
                         // Pool<C>::destroy is the correct one
    bPtr.operator->() ;  // faces the same problem as above
}
Assumption
Pool is always deleted after Handle (for simplicity).
no multi-threading
Here are similar questions, but none are close enough.
C++ object-pool that provides items as smart-pointers that are returned to pool upon deletion
C++11 memory pool design pattern?
Regarding weak_ptr
A std::weak_ptr is always associated with a std::shared_ptr. To use weak_ptr you would have to manage your objects with shared_ptr. This would mean ownership of your objects can be shared: Anybody can construct a shared_ptr from a weak_ptr and store it somewhere. The pointed-to object will only get deleted when all shared_ptr's are destroyed. The Pool will lose direct control over object deallocation and thus cannot support a public destroy() function.
With shared ownership things can get really messy.
This is one reason why std::unique_ptr often is a better alternative for object lifetime management (sadly it doesn't work with weak_ptr). Your Handle::destroy() function also implies that this is not what you want and that the Pool alone should handle the lifetime of its objects.
However, shared_ptr/weak_ptr are designed for multi-threaded applications. In a single-threaded environment you can get weak_ptr-like functionality (check for valid targets and avoid dangling pointers) without using weak_ptr at all:
template<class T> class Pool {
    bool isAlive(int index) const { ... }
};
template<class T> class Handle {
    explicit operator bool() const { return pool_->isAlive(pool_index_); }
};
Why does this only work in a single-threaded environment?
Consider this scenario in a multi-threaded program:
void doSomething(std::weak_ptr<Obj> weak) {
std::shared_ptr<Obj> shared = weak.lock();
if(shared) {
// Another thread might destroy the object right here
// if we didn't have our own shared_ptr<Obj>
shared->doIt(); // And this would crash
}
}
In the above case, we have to make sure that the pointed-to object is still accessible after the if(). We therefore construct a shared_ptr that will keep it alive - no matter what.
In a single-threaded program you don't have to worry about that:
void doSomething(Handle<Obj> handle) {
if(handle) {
// No other threads can interfere
handle->doIt();
}
}
You still have to be careful when dereferencing the handle multiple times. Example:
void doDamage(Handle<GameUnit> source, Handle<GameUnit> target) {
if(source && target) {
source->invokeAction(target);
// What if 'target' reflects some damage back and kills 'source'?
source->payMana(); // Segfault
}
}
But with another if(source) you can now easily check if the handle is still valid!
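For instance, a minimal adjustment of the snippet above (the re-check is my addition):

void doDamage(Handle<GameUnit> source, Handle<GameUnit> target) {
    if(source && target) {
        source->invokeAction(target);
        if(source)             // re-check: the reflected damage may have killed 'source'
            source->payMana(); // no segfault now
    }
}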
Casting Handles
So, the template argument T as in Handle<T> doesn't necessarily match the type of the pool. Maybe you could resolve this with template magic. I can only come up with a solution that uses dynamic dispatch (virtual method calls):
struct PoolBase {
virtual void destroy(int index) = 0;
virtual void* get(int index) = 0;
virtual bool isAlive(int index) const = 0;
};
template<class T> struct Pool : public PoolBase {
Handle<T> create() { return Handle<T>(this, nextIndex); }
void destroy(int index) override { ... }
void* get(int index) override { ... }
bool isAlive(int index) const override { ... }
};
template<class T> struct Handle {
PoolBase* pool_;
int pool_index_;
Handle(PoolBase* pool, int index) : pool_(pool), pool_index_(index) {}
// Conversion Constructor
template<class D> Handle(const Handle<D>& orig) {
T* Cannot_cast_Handle = (D*)nullptr;
(void)Cannot_cast_Handle;
pool_ = orig.pool_;
pool_index_ = orig.pool_index_;
}
explicit operator bool() const { return pool_->isAlive(pool_index_); }
T* operator->() { return static_cast<T*>( pool_->get(pool_index_) ); }
void destroy() { pool_->destroy(pool_index_); }
};
Usage:
Pool<Impl> pool;
Handle<Impl> impl = pool.create();
// Conversions
Handle<Base> base = impl; // Works
Handle<Impl> impl2 = base; // Compile error - which is expected
The lines that check for valid conversions are likely to be optimized out. The check will still happen at compile-time! Trying an invalid conversion will give you an error like this:
error: invalid conversion from 'Base*' to 'Impl*' [-fpermissive]
T* Cannot_cast_Handle = (D*)nullptr;
I uploaded a simple, compilable test case here: http://ideone.com/xeEdj5
I tried to create a packed array as a data structure for a game engine, as described here:
http://experilous.com/1/blog/post/dense-dynamic-arrays-with-stable-handles-part-1
In short, the structure stores values, instead of pointers.
Here is a draft.
template<class T> class Id{
    int id;
};
template<class T> class PackArray{
    std::vector<int> indirection;   //promote indirection here
    std::vector<T> data;
    //... some fields for pooling (for recycling instances of T)
    Id<T> create(){
        data.push_back(T());
        //.... update indirection ...
        return Id<T>( .... index, usually = indirection.size()-1 .... );
    }
    T* get(Id<T> id){
        return &data[indirection[id.id]];
        //the returned pointer is not stable; the caller can't hold it for very long
    }
    //... other functions, e.g. destroy(Id<T>) ...
};
The prototype works as I wished, but now I am concerned about the beauty of the old code.
For example, I had always created a new object like this:
Bullet* bullet = new Bullet(gameEngine,velocity);
Now I must call:
Id<Bullet> bullet = getManager()->create()->ini(velocity);
// getManager() usually return PackArray<Bullet>*
// For this data structure,
// if I want to hold the object for a long time, I have to cache it as Id.
Here are the questions:
The new version of the code is uglier.
Should I avoid it? How?
How can I avoid or reduce the programmer's work for the above modification?
It is very tedious when there are many of them scattered around.
(Edit) The scariest part is the change in type declarations, e.g.
class Rocket{
std::vector<Bullet*> bullets;
//-> std::vector<Id<Bullet>> bullets;
void somefunction(){
Bullet* bullet = someQuery();
//-> Id<Bullet> bullet
}
}//These changes scatter around many places in many files.
This change (inserting the word "Id<>") means that the game logic has to know the underlying data structure used to store Bullet.
If the underlying data structure is changed again in the future, I will have to manually refactor them one by one again (from Id<> to something else), i.e. lower maintainability.
(optional) What is the name of this data structure / technique?
As a library, should Id have a field of type PackArray* to enable accessing the underlying object (e.g. Bullet*) without the manager()?
Bullet* bullet = someId->getUnderlyingObject();
This behaviour of Id sounds like handles: you don't give out information about the storage method, but you guarantee access as long as the handle is valid. In the latter respect handles behave like raw pointers: you won't be able to tell whether one is valid (at least not without the manager), and the handle might be reused at some point.
The question of whether changing from raw pointers to handles produces uglier code is very opinionated, so I'd rather keep this objective: there's a balance between being readably explicit and too much typing - everyone draws their own limits here. There are also advantages to having the calling site specify getManager: maybe there are multiple possible instances of these managers, or maybe getting the manager requires locking and for multiple operations you want to lock only once. (You can support both of these cases in addition to what I present below.)
Let's use pointer/iterator notation to access the objects through our handles, reducing the amount of code changes necessary. Using std::make_unique and std::make_shared for reference, let's define make_handle to dispatch the creation to the right manager. I've adjusted PackArray::create a bit to make the following example more compact:
template<class T> class Handle;
template<class T> class PackArray;
template<class T, class... Args> Handle<T> make_handle(Args&&... args);
template<class T>
struct details {
friend class Handle<T>;
template<class U, class... Args> friend Handle<U> make_handle(Args&&... args);
private:
// tight control over who gets to access the underlying storage
static PackArray<T>& getManager();
};
template<class T>
class Handle {
friend class PackArray<T>;
size_t id;
public:
// accessors (via the manager)
T& operator*();
T* operator->() { return &*(*this); }
};
template<class T>
class PackArray {
std::vector<size_t> idx;
std::vector<T> data;
public:
template<class... Args>
Handle<T> create(Args&&... args) {
Handle<T> handle;
handle.id = data.size();
idx.push_back(data.size());
// enables non-default constructable types
data.emplace_back(std::forward<Args>(args)...);
return handle;
}
// access using the handle
T& get(Handle<T> handle) {
return data[idx[handle.id]];
}
};
template<class T, class... Args>
Handle<T> make_handle(Args&&... args) {
Handle<T> handle = details<T>::getManager().create(std::forward<Args>(args)...);
return handle;
}
template<class T>
T& Handle<T>::operator*() {
return details<T>::getManager().get(*this);
}
And the usage code would look like:
Handle<int> hIntA = make_handle<int>();
Handle<int> hIntB = make_handle<int>(13);
Handle<float> hFloatA = make_handle<float>(13.37f);
Handle<Bullet> hBulletA = make_handle<Bullet>();
// Accesses through the respective managers
*hIntA = 42; // assignment
std::cout << *hIntB; // prints 13
float foo = (*hFloatA + 12.26f) * 0.01;
applyDamage(hBulletA->GetDmgValue());
Every type needs a manager, i.e. if you don't define a default you'll get a compiler error. Alternatively you can provide a generic implementation (note: with compilers that lack C++11 thread-safe statics, the initialisation of instance is not thread safe!):
template<class T>
PackArray<T>& details<T>::getManager() {
static PackArray<T> instance;
return instance;
}
You get special behaviour via template specialisation. You can even replace the manager type via template specialisation, allowing you to easily compare storage strategies (e.g. SOA vs. AOS).
template<>
struct details<Bullet> {
friend class Handle<Bullet>;
template<class U, class... Args> friend Handle<U> make_handle(Args&&... args);
private:
static MyBulletManager& getManager() {
static MyBulletManager instance;
std::cout << "special bullet store" << std::endl;
return instance;
}
};
And you can even make all of this const-correct (the same techniques as implementing custom iterators apply).
You may even want to extend the details<T> to a full traits type... It's all a balance between generalisation and complexity.
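For instance, const-correctness could start with const overloads of the Handle accessors (a sketch only; these overloads are not part of the code above, and a fully const-correct version would also add a const get() to the manager):

// additional members inside Handle<T>:
const T& operator*() const { return details<T>::getManager().get(*this); }
const T* operator->() const { return &*(*this); }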
OK, the question title is a bit hard to phrase. What I am trying to achieve is to create a template class with get/set functions that can handle simple types and structures.
This is simple for types such as integers and chars, but when the template type 'T' is a struct it gets harder.
For example, here is a template class, where I have omitted various parts of it (such as constructor, etc), but it shows the get/set function:
EDIT: Only this class is allowed to modify the data, so passing a reference outside is not allowed. The reason is that I want to put a mutex around the set/get. I will/have updated the functions...
template <class T> class storage
{
private:
T m_var;
pthread_mutex_t m_mutex;
public:
void set(T value)
{
pthread_mutex_lock(&m_mutex);
m_var = value;
pthread_mutex_unlock(&m_mutex);
}
T get(void)
{
T tmp;
// Note: Can't return the value within the mutex otherwise we could get into a deadlock. So
// we have to first read the value into a temporary variable and then return that.
pthread_mutex_lock(&m_mutex);
tmp = m_var;
pthread_mutex_unlock(&m_mutex);
return tmp;
}
};
Then consider the following code:
struct shape_t
{
int numSides;
int x;
int y;
};
int main()
{
storage<int> intStore;
storage<shape_t> shapeStore;
// To set int value I can do:
intStore.set(2);
// To set shape_t value I can do:
shape_t tempShape;
tempShape.numSides = 2;
tempShape.x = 5;
tempShape.y = 4;
shapeStore.set(tempShape);
// To modify 'x' (and keep y and numSides the same) I have to do:
tempShape = shapeStore.get();
tempShape.x = 5;
shapeStore.set(tempShape);
}
What I want to be able to do, if its possible, is to set the members of shape_t individually via some means in the template class, something like:
shapeStore.set(T::numSides, 2);
shapeStore.set(T::x, 5);
shapeStore.set(T::y, 4);
And not have to use a temp var. Is this possible? how?
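To make the desired interface concrete, I imagine something like a pointer-to-member overload (just a sketch of the call syntax I'm after; I don't know whether this is the right approach):

// hypothetical addition to the storage template:
template <class M>
void set(M T::*member, const M &value)
{
    pthread_mutex_lock(&m_mutex);
    m_var.*member = value;   // update a single field under the lock
    pthread_mutex_unlock(&m_mutex);
}

// usage:
// shapeStore.set(&shape_t::numSides, 2);
// shapeStore.set(&shape_t::x, 5);
// shapeStore.set(&shape_t::y, 4);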
I looked at this answer, but it did not quite do what I wanted because it is for a specific structure type
Make your get() member return a reference:
T& get()
{
return m_var;
}
Then you could say
shapeStore.get().x = 42;
Note it is good practice to add a const overload:
const T& get() const
{
return m_var;
}
Also note that if your get and set methods really do nothing special, as in your example, you might consider making the data public and doing away with getters/setters:
template <class T> struct storage
{
T m_var;
};
Edit: If you want to allow synchronised changes to the member, an option is to have a method that takes a modifying function. The function is applied inside the class, in your case, protected by the mutex. For example,
template <class T> struct storage
{
storage() : m_var() {}
void do_stuff(std::function<void(T&)> f)
{
std::lock_guard<std::mutex> lock(m_mutex);
f(m_var);
}
private:
T m_var;
std::mutex m_mutex;
};
Then you can modify members in a synchronised manner:
storage<shape_t> shapeStore;
shapeStore.do_stuff([](shape_t& s)
{ s.x = 42;
s.y = 100; });
If you don't have C++11 you can pass a function instead:
void foo(shape_t& s) { s.x = 42; }
shapeStore.do_stuff(foo);
Your design is fairly workable for primitive types, but it requires you to replicate the entire interface of class types and quickly becomes unmanageable. Even in the case of primitive types, you might want to enable more complex atomic operations than simply get and set, e.g., increment or add or multiply. The key to simplifying the design is to realize that you don't actually want to interpose on every single operation the client code performs on the data object, you only need to interpose before and after the client code atomically performs a sequence of operations.
Anthony Williams wrote a great article in Doctor Dobb's Journal years ago about this exact problem using a design where the manager object provides a handle to the client code that the client uses to access the managed object. The manager interposes only on the handle creation and destruction allowing clients with a handle unfettered access to the managed object. (See the recent proposal for standardization for excruciating detail.)
You could apply the approach to your problem fairly easily. First, I'll replicate some parts of the C++11 threads library because they make it MUCH easier to write correct code in the presence of exceptions:
class mutex {
pthread_mutex_t m_mutex;
// Forbid copy/move
mutex(const mutex&); // C++11: = delete;
mutex& operator = (const mutex&); // C++11: = delete;
public:
mutex() { pthread_mutex_init(&m_mutex, NULL); }
~mutex() { pthread_mutex_destroy(&m_mutex); }
void lock() { pthread_mutex_lock(&m_mutex); }
void unlock() { pthread_mutex_unlock(&m_mutex); }
bool try_lock() { return pthread_mutex_trylock(&m_mutex) == 0; }
};
class lock_guard {
mutex& mtx;
public:
lock_guard(mutex& mtx_) : mtx(mtx_) { mtx.lock(); }
~lock_guard() { mtx.unlock(); }
};
The class mutex wraps up a pthread_mutex_t concisely. It handles creation and destruction automatically, and saves our poor fingers some keystrokes. lock_guard is a handy RAII wrapper that automatically unlocks the mutex when it goes out of scope.
storage then becomes incredibly simple:
template <class> class handle;
template <class T> class storage
{
private:
T m_var;
mutex m_mutex;
public:
storage() : m_var() {}
storage(const T& var) : m_var(var) {}
friend class handle<T>;
};
It's simply a box with a T and a mutex inside. storage trusts the handle class to be friendly and allows it poke around its insides. It should be clear that storage does not directly provide any access to m_var, so the only way it could possibly be modified is via a handle.
handle is a bit more complex:
template <class T>
class handle {
T& m_data;
lock_guard m_lock;
public:
handle(storage<T>& s) : m_data(s.m_var), m_lock(s.m_mutex) {}
T& operator* () const {
return m_data;
}
T* operator -> () const {
return &m_data;
}
};
It keeps a reference to the data item and holds one of those handy automatic lock objects. The use of operator* and operator-> makes handle objects behave like a pointer to T.
Since the only way to access the object inside storage is through a handle, and a handle guarantees that the appropriate mutex is held during its lifetime, there's no way for client code to forget to lock the mutex, or to accidentally access the stored object without locking it. It can't even forget to unlock the mutex, which is nice as well. Usage is simple (see it working live at Coliru):
storage<int> foo;
int example() {
{
handle<int> p(foo);
// We have exclusive access to the stored int.
*p = 42;
}
// other threads could access foo here.
{
handle<int> p(foo);
// We have exclusive access again.
*p *= 12;
// We can safely return with the mutex held,
// it will be unlocked for us in the handle destructor.
return ++*p;
}
}
You would code the program in the OP as:
struct shape_t
{
int numSides;
int x;
int y;
};
int main()
{
storage<int> intStore;
storage<shape_t> shapeStore;
// To set int value I can do:
*handle<int>(intStore) = 2;
{
// To set shape_t value I can do:
handle<shape_t> ptr(shapeStore);
ptr->numSides = 2;
ptr->x = 5;
ptr->y = 4;
}
// To modify 'x' (and keep y and numSides the same) I have to do:
handle<shape_t>(shapeStore)->x = 5;
}
I can propose an alternative solution.
When you need access, you get a special accessor object from the template class that allows managing the contained object.
template < typename T >
class SafeContainer
{
public:
// Variadic template for constructor
template<typename ... ARGS>
SafeContainer(ARGS ...arguments)
: m_data(arguments ...)
{};
// RAII mutex
class Accessor
{
public:
// lock when created
Accessor(SafeContainer<T>* container)
:m_container(container)
{
m_container->m_mutex.lock();
}
// Unlock when destroyed
~Accessor()
{
m_container->m_mutex.unlock();
}
// Access methods
T* operator -> ()
{
return &m_container->m_data;
}
T& operator * ()
{
return m_container->m_data;
}
private:
SafeContainer<T> *m_container;
};
friend Accessor;
Accessor get()
{
return Accessor(this);
}
private:
T m_data;
// Should be using recursive mutex to avoid deadlocks
std::mutex m_mutex;
};
Example:
struct shape_t
{
int numSides;
int x;
int y;
};
int main()
{
SafeContainer<shape_t> shape;
auto shapeSafe = shape.get();
shapeSafe->numSides = 2;
shapeSafe->x = 2;
}
Is there a way to code a write-only reference to an object? For example, suppose there was a mutex class:
template <class T> class mutex {
protected:
T _data;
public:
mutex();
void lock(); //locks the mutex
void unlock(); //unlocks the mutex
T& data(); //returns a reference to the data, or throws an exception if lock is unowned
};
Is there a way to guarantee that one couldn't do this:
mutex<type> foo;
type& ref;
foo.lock();
foo.data().do_stuff();
ref = foo.data();
foo.unlock();
//I have a unguarded reference to foo now
On the other hand, is it even worth it? I know that some people assume that programmers won't deliberately clobber the system, but then, why do we have private variables in the first place, eh? It'd be nice to just say it's "Undefined Behavior", but that just seems a little bit too insecure.
EDIT: OK, I understand the idea of a setter routine, but how would this be accomplished?
mutex<vector<int> > foo;
foo.lock();
for (int i=0; i < 10; i++) {
foo.data().push_back(i);
}
foo.unlock();
Using a set routine would require a copy for each write:
mutex<vector<int> > foo;
foo.lock();
for (int i=0; i < 10; i++) {
vector<int> copy = foo.read();
copy.push_back(i);
foo.write(copy);
}
Though you could trivially optimize this specific case, if, say, several different threads are all pushing elements, and maybe even erasing a few, this can become quite a bit of excess memory copying (i.e. one copy per critical section).
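The trivial optimization mentioned above would be one read and one write per critical section instead of one per element (a sketch using the same hypothetical read()/write() interface as the snippet above):

mutex<vector<int> > foo;
foo.lock();
vector<int> copy = foo.read();
for (int i = 0; i < 10; i++) {
    copy.push_back(i);
}
foo.write(copy);
foo.unlock();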
Yes, you can create a wrapper class that becomes invalidated when unlock is called and return the wrapper, instead of returning the reference, and you can overload its assignment operator to assign to the reference. The trick is that you need to hang onto a reference to the wrapper's internal data, so that when unlock is called, prior to releasing the lock, you invalidate any wrappers that you have created.
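A minimal sketch of that idea (my own invented names; the real mutex locking is elided to keep it short): data() hands out a proxy instead of a T&, and unlock() flips a shared flag before releasing the lock, so any outstanding proxies refuse to write.

#include <memory>
#include <stdexcept>

template <class T> class guarded_box {
    T _data{};
    std::shared_ptr<bool> _valid;   // shared with proxies from the current lock
public:
    class proxy {
        T *_p;
        std::shared_ptr<bool> _valid;
    public:
        proxy(T *p, std::shared_ptr<bool> v) : _p(p), _valid(std::move(v)) {}
        proxy &operator=(const T &value) {          // write through the proxy
            if (!*_valid)
                throw std::logic_error("stale proxy: box was unlocked");
            *_p = value;
            return *this;
        }
    };

    void lock()   { /* acquire the real mutex here */ _valid = std::make_shared<bool>(true); }
    void unlock() { if (_valid) *_valid = false; _valid.reset(); /* release the real mutex here */ }
    proxy data()  {
        if (!_valid) throw std::logic_error("not locked");
        return proxy(&_data, _valid);
    }
};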
The common way to differentiate between getters and setters is by the const-ness of the object:
template <class T> class mutex {
public:
mutex();
void lock();
void unlock();
T& data(); // cannot be invoked for const objects
const T& data() const; // can be invoked for const objects
protected:
T _data;
};
Now, if you want to have read-only access, make the mutex const:
void read_data(const mutex< std::vector<int> >& data)
{
// only const member functions can be called here
}
You can bind a non-const object to a const reference:
// ...
mutex< std::vector<int> > data;
data.lock();
read_data(data);
data.unlock();
// ...
Note that the lock() and unlock() functions are inherently unsafe in the face of exceptions:
void f(mutex< std::vector<int> >& data)
{
    data.lock();
    data.data().push_back(42); // might throw an exception
    data.unlock(); // will never be reached if push_back() throws
}
The usual way to solve this is RAII (resource acquisition is initialization):
template <class T> class lock;
template <class T> class mutex {
public:
mutex();
protected:
T _data;
private:
friend class lock<T>;
T& data();
void lock();
void unlock();
};
template <class T> class lock {
public:
    lock(mutex<T>& m) : m_(m) {m_.lock();}
    ~lock() {m_.unlock();}
    T& data() {return m_.data();}
    const T& data() const {return m_.data();}
private:
    mutex<T>& m_;
};
Note that I have also moved the accessor functions to the lock class, so that there is no way to access unlocked data.
You can use this like this:
void f(mutex< std::vector<int> >& data)
{
    {
        lock< std::vector<int> > lock_1(data);
        std::cout << lock_1.data()[0];  // fine
        lock_1.data().push_back(42);    // fine, too
    }
    {
        const lock< std::vector<int> > lock_2(data); // note the const
        std::cout << lock_2.data()[0];  // fine, too
        // lock_2.data().push_back(42); // compiler error
    }
}
You could encapsulate the data as private and expose a write routine. Within that routine you could lock your mutex, giving you similar behavior to what you are shooting for.
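A minimal sketch of that suggestion (class and member names are mine, not from the question):

#include <mutex>

template <class T> class guarded {
    T _data;
    std::mutex _mutex;
public:
    // the write routine is the only way to modify _data,
    // and it takes the lock internally
    void write(const T &value) {
        std::lock_guard<std::mutex> lock(_mutex);
        _data = value;
    }
    // reads hand out a copy, never a reference to the guarded object
    T read() {
        std::lock_guard<std::mutex> lock(_mutex);
        return _data;
    }
};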
You can use a member function as the following:
void set_data(const T& var);
This is how write-only access is applied in C++.
No. There is no way to guarantee anything about reading and writing memory in unsafe languages like C++, where all the memory is treated like one big array.
[Edit] Not sure why all the downvotes; this is correct and relevant.
In safe languages, such as Java or C#, you can certainly guarantee that, for instance, properly-implemented immutable types will stay immutable. Such a guarantee can never be made in C++.
The fear is not so much malicious users as it is accidental invalid-pointers; I have worked on C++ projects where immutable types have been mutated due to an invalid pointer in completely unrelated code, causing bugs that are extremely difficult to track down. This guarantee - which only safe languages can make - is both useful and important.