Is pairing weak_ptr to unique_ptr a good idea? - c++

I know it sounds absurd to use weak_ptrs with unique_ptrs, but bear with me please.
I have a set of widgets and animations that act on them. The widgets have a clear owner, who creates and destroys them. All the widgets are created, destroyed and animated in one thread, so a widget can't be destroyed while the animation code is running. As you can see, the widgets are in some sense shared with the animations, but an animation should stop if its widget gets deleted.
The current approach is to use std::unique_ptr in the owners of the widgets and expose raw pointers to the animations. This makes finding/debugging dangling pointers very difficult. One proposal was to switch to std::shared_ptr inside the owner class and expose std::weak_ptrs to the animations, but this adds some unwanted/unneeded overhead to the system.
Is it possible (a good idea?) to create some sort of weak_ptr on top of std::unique_ptr that just flags that the pointee was deleted? If yes, can you please suggest some implementations with minimal overhead for single-threaded usage?
EDIT:
One more clarification - the widgets are used in one thread, but the application has multiple threads. Also, lots of animations run in parallel, and each animation is updated 60 times per second. The overhead from std::shared_ptr/std::weak_ptr comes from the (atomic) counter used inside std::shared_ptr, which is not actually needed in this particular case.
EDIT:
I'm not asking whether I can use std::weak_ptr with std::unique_ptr; I know that is not possible. I'm asking whether it is possible (and a good idea) to build something with behavior similar to std::weak_ptr that can be paired with std::unique_ptr.

No, you can't use std::weak_ptr with std::unique_ptr. You make it a std::shared_ptr and expose a std::weak_ptr, just like you said.
As far as the overhead of reference counting is concerned, I highly doubt that it will be the bottleneck of your application, so profile first and worry about it only if it actually turns out to be one (it probably never will).
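For reference, here is a minimal sketch of that arrangement (Widget, Animation and Owner are placeholder names, not from the question): the owner keeps the std::shared_ptr, and the animation holds only a std::weak_ptr and checks it on every tick.

#include <cstddef>
#include <memory>
#include <vector>

struct Widget { /* ... */ };

// The animation never owns the widget; it only observes it.
struct Animation {
    std::weak_ptr<Widget> target;

    // Returns false once the widget has been destroyed, so the caller can drop the animation.
    bool tick() {
        if (auto w = target.lock()) {   // promote to shared_ptr for this frame only
            // ... animate *w ...
            return true;
        }
        return false;
    }
};

// The owner is the single point of ownership.
struct Owner {
    std::vector<std::shared_ptr<Widget>> widgets;

    std::weak_ptr<Widget> expose(std::size_t i) const { return widgets[i]; }
};

Holding the shared_ptr only for the duration of one tick keeps ownership with the owner; the animation never extends the widget's lifetime beyond the current frame.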

Sure, it's a reasonable idea. It provides control over the lifetime of the object while giving the subordinate threads the opportunity to detect its disappearance.
Of course the lock() method of your weak objects will need to return something that does not itself allow re-sharing.
You can do this by encapsulating existing shared_ptr and weak_ptr objects.
A simple example:
#include <iostream>
#include <memory>

// some type we're going to use for testing
struct Foo {
    ~Foo() {
        std::cout << "Foo destroyed" << std::endl;
    }
    void use() const {
        std::cout << "using Foo" << std::endl;
    }
};

// forward declaration
template<class T> struct weak_object_ptr;

// a pointer that keeps the object alive but is not itself copyable
template<class T>
struct keep_alive_ptr
{
    // make it moveable
    keep_alive_ptr(keep_alive_ptr&&) = default;
    keep_alive_ptr& operator=(keep_alive_ptr&&) = default;

    // provide accessors
    T& operator*() const {
        return *_ptr;
    }

    T* operator->() const {
        return _ptr.get();
    }

private:
    // private constructor - the only way to make one of these is to lock a weak_object_ptr
    keep_alive_ptr(std::shared_ptr<T> ptr)
    : _ptr { std::move(ptr) }
    {}

    // non-copyable
    keep_alive_ptr(const keep_alive_ptr&) = delete;
    keep_alive_ptr& operator=(const keep_alive_ptr&) = delete;

    friend weak_object_ptr<T>;
    std::shared_ptr<T> _ptr;
};

// a weak reference to our shared object with single point of ownership
template<class T>
struct weak_object_ptr
{
    weak_object_ptr(std::weak_ptr<T> w)
    : _weak { std::move(w) }
    {}

    keep_alive_ptr<T> lock() const {
        return keep_alive_ptr<T> { _weak.lock() };
    }

private:
    std::weak_ptr<T> _weak;
};

// a shared object store and lifetime controller
template<class T>
struct object_controller
{
    // helpful universal constructor
    template<class...Args>
    object_controller(Args&&...args)
    : _controller { std::make_shared<T>(std::forward<Args>(args)...) }
    {}

    weak_object_ptr<T> get_weak() const {
        return weak_object_ptr<T> { _controller };
    }

    void reset() {
        _controller.reset();
    }

private:
    std::shared_ptr<T> _controller;
};

// test
using namespace std;

int main(){
    auto foo_controller = object_controller<Foo> {};

    auto weak1 = foo_controller.get_weak();
    auto weak2 = foo_controller.get_weak();

    {
        auto strong1 = weak1.lock();
        strong1->use();
        cout << "trying to destroy Foo\n";
        foo_controller.reset();

        auto strong2 = weak2.lock();
        strong2->use();
        cout << "strong2 going out of scope\n";
    }

    return 0;
}
expected output (note that the destruction of Foo takes place as early as it is legally allowed):
using Foo
trying to destroy Foo
using Foo
strong2 going out of scope
Foo destroyed
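Tying this back to the question, an animation could hold a weak_object_ptr<Widget> and lock it once per frame. A minimal sketch, assuming keep_alive_ptr is extended with an explicit operator bool (not part of the code above), so the animation can detect that the widget is gone:

// Hypothetical addition to keep_alive_ptr, so emptiness can be tested:
//     explicit operator bool() const { return static_cast<bool>(_ptr); }

struct Widget { void advance_frame() { /* ... */ } };

struct WidgetAnimation {
    weak_object_ptr<Widget> target;

    explicit WidgetAnimation(weak_object_ptr<Widget> w) : target{std::move(w)} {}

    // Called 60 times per second; returns false once the widget has been destroyed.
    bool tick() {
        auto widget = target.lock();   // keep_alive_ptr: keeps the widget alive for this frame
        if (!widget)                   // relies on the operator bool sketched above
            return false;
        widget->advance_frame();
        return true;
    }
};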

Is there a C++ smart pointer that could wrap up an object to make it thread safe?

I wanted to ask if there is a smart pointer that could take any class as its template parameter such that any operation done through the pointer would be thread-safe.
Basically, the idea is that such a pointer would automatically hold an internal lock for the duration of a scope and release it when the pointer goes out of scope.
A use case would be, for example, to pull such a pointer from a static, pre-allocated array into some scope and perform thread-safe operations on the object inside that scope.
I tried to find a C++ library/feature that allows thread-safe mutation of an object by wrapping it in a single smart pointer object.
if there is a smart pointer that could take any class as its template parameter such that any operation done through the pointer would be thread-safe.
No, there is no such smart pointer in the C++ standard.
I don't think that's possible in the "usual" smart pointer sense: when doing ptr->something() or (*ptr).something(), the operator-> or operator* method is called, it returns the pointer/reference, and only then is something invoked, so you have no way of knowing when to unlock the mutex after the operation is done. This can be worked around with proxy objects, but that's another can of worms, especially when mixed with usage of auto.
Moreover, on a higher level this is rarely a kind of thread-safety guarantee one actually needs. In a codebase of ours someone once wrote a wrapper for std::map with a mutex protecting some common mutation operations; this was eminently useless for several reasons. The most obvious was that operator[] returns a reference anyway (so, you get a reference that may be instantly invalidated by someone else calling e.g. erase()); but most importantly, people did stuff like if (!map.count(key)) { map[key].do_something(); }, ignoring the fact that the result of count became stale immediately.
The takeaway here is that mutex-wrapping single operations on an object generally doesn't gain you much: to actually work safely in a sane manner you usually need to hold the mutex for a longer period, to ensure your code sees a consistent snapshot of the protected object's state.
One way to attack both problems is to approach the whole thing from a different angle: wrap your object in an "escrow" object that forces you to take the mutex to access the data, and also think in terms of doing all the operations you need in a single mutex acquisition. A sketch might look like this:
template<typename T>
class MutexedPtr {
    std::mutex mtx;
    std::unique_ptr<T> ptr;

public:
    MutexedPtr(std::unique_ptr<T> ptr) : ptr(std::move(ptr)) {}

    template<typename FnT>
    void access(FnT fn) {
        std::lock_guard<std::mutex> lk(mtx);
        fn(*ptr);
    }
};
The usage should be something like:
MutexedPtr<Something> ptr = ...;
...
ptr.access([&](Something &obj) {
    // do your stuff with obj while the mutex is taken
});
Whether this is something that could be useful for your use case is up to you.
I wanted to ask if there is a smart pointer that could take any class as its template parameter such that any operation done through the pointer would be thread-safe.
Yes, that's possible. Here's a simple implementation:
#include <thread>
#include <mutex>
#include <cstdio>

template <class T>
struct SyncronizedPtrImpl {
private:
    std::scoped_lock<std::mutex> lock;
    T* t;

public:
    SyncronizedPtrImpl(std::mutex& mutex, T* t) : lock(mutex), t(t) {}
    T* operator->() const { return t; }
};

template <class T>
struct SyncronizedPtr {
private:
    std::mutex mutex;
    T* p;

public:
    SyncronizedPtrImpl<T> operator->() {
        return SyncronizedPtrImpl<T>{mutex, p};
    }
    SyncronizedPtr(T* p) : p(p) {}
    ~SyncronizedPtr() { delete p; }
};

int main() {
    struct Foo {
        int val = 0;
    };
    SyncronizedPtr ptr(new Foo);
    std::thread t1([&] {
        for (int i = 0; i != 10; ++i) ++ptr->val;
    });
    std::thread t2([&] {
        for (int i = 0; i != 10; ++i) --ptr->val;
    });
    t1.join();
    t2.join();
    return ptr->val == 0;
}
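One caveat worth noting (it is the same issue raised in the previous answer): each -> acquires and releases the mutex for one call only, so compound operations are still racy. A small illustration using the SyncronizedPtr above:

// Each arrow takes the lock separately, so another thread can interleave
// between the read and the write; this check-then-act is not atomic as a whole.
if (ptr->val == 0) {   // lock taken and released here
    ++ptr->val;        // lock taken and released again here
}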

in-class in-line non-static field initialization + object pool -> decrease maintainability/readability

To improve performance when creating and destroying objects, pooling is a possibility.
In some situations, I don't want to go into low-level techniques like a custom allocator or raw char[] buffers.
Another way is to create an object pool.
However, this technique doesn't go well with in-class (inline) field initialization.
At first, I didn't think this was a problem at all.
However, the pattern keeps reappearing, hundreds of times, and I think I should have some countermeasure.
Example
Assume that the first version of my program looks like this:-
class Particle{
    int lifeTime = 100;  //<-- inline initialization
    //.... some function that modifies "lifeTime"
};

int main(){
    auto p1 = new Particle();
    delete p1;
    //... many particles created & deleted randomly ...
}
After adopting an object pool, my program can be:-
class Particle{
    int lifeTime = 100;  //<---- maintainability issue
    void reset(){
        lifeTime = 100;  //<---- maintainability issue
    }
};

int main(){
    auto* p1 = pool.create();
    //.... "Particle::reset()" has to be called somewhere.
}
The duplicated code causes a maintainability issue.
Question
How do I adapt an object pool to an existing class that uses inline field initialization, without sacrificing code maintainability and readability?
My current workaround
I usually let the constructor call reset().
class Particle{
    int lifeTime;
public:
    Particle(){
        reset();  //<---- call it here, or let "pool" call it
    }
    void reset(){
        lifeTime = 100;
    }
};
Disadvantage: it reduces code readability compared to the old inline initialization:-
int lifeTime=100;
Sorry if this is too much of a beginner question; I am new to C++.
This is a usual use case for std::unique_ptr<> with a custom deleter:
class Base {
    static constexpr int lifespan = 100;
    int lifetime = lifespan;
public:
    void reset() noexcept { lifetime = lifespan; }
};

struct Deleter {
    void operator ()(Base* const b) const {
        b->reset();
    }
};

struct Particle : Base {
    // ...
};

struct Pool {
    std::unique_ptr<Particle, Deleter> create() {
        // ...
    }
};

int main() {
    // ...
    auto p1 = pool.create();
}
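To make the idea concrete, here is a minimal, self-contained sketch of one way the "// ..." gaps above might be filled in. The free-list Pool and PoolDeleter below are my own illustration, not part of the answer above:

#include <memory>
#include <vector>

class Base {
    static constexpr int lifespan = 100;
    int lifetime = lifespan;
public:
    void reset() noexcept { lifetime = lifespan; }
};

struct Particle : Base {
    // ...
};

class Pool;

// Deleter that resets the particle and hands it back to the pool instead of destroying it.
struct PoolDeleter {
    Pool* pool;
    void operator()(Particle* p) const;
};

using pooled_particle = std::unique_ptr<Particle, PoolDeleter>;

class Pool {
    std::vector<std::unique_ptr<Particle>> free_list;  // owns idle particles
public:
    pooled_particle create() {
        Particle* p;
        if (free_list.empty()) {
            p = new Particle();                 // grow the pool on demand
        } else {
            p = free_list.back().release();
            free_list.pop_back();
        }
        return pooled_particle(p, PoolDeleter{this});
    }
    void give_back(Particle* p) {
        p->reset();                             // the value 100 lives only in reset()
        free_list.emplace_back(p);
    }
};

void PoolDeleter::operator()(Particle* p) const { pool->give_back(p); }

int main() {
    Pool pool;
    auto p1 = pool.create();   // used like a normal unique_ptr
    // ... p1 is automatically returned to the pool when it goes out of scope
}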
The solution to this really depends on the combination of
Why do you need to pool objects?
Why do objects need to have a default lifeTime of 100?
Why do objects need to change their lifeTime?
Why do existing objects obtained from the pool need to have their lifeTime reset to 100?
You have partially answered the first, although I'll bet your stated goal of improving performance is not based on anything other than "you think you need to improve performance". Really, such a goal should be based on measured performance being insufficient, otherwise it is no more than premature optimisation.
In any event, if I assume for the sake of discussion that all of my questions above have good answers, I would do the following:
class Particle
{
public:
    // member functions that provide the functionality used by `main()`.

private:   // note all the members below are private
    int lifeTime;

    Particle();

    void reset()
    {
        lifeTime = 100;
    }

    friend class Pool;
};

class Pool
{
public:
    Particle *create()
    {
        Particle *p;
        // obtain an object for p to point at;
        // that may mean releasing it from some "pool" or creating a new one
        p->reset();
        return p;
    }

    void give_back(Particle *&p)
    {
        // move the value of p back into whatever the "pool" is
        p = NULL;   // so the caller has some indication it should not use the object
    }
};

int main()
{
    // presumably pool is created somehow and visible here
    auto* p1 = pool.create();
    // do things with p1
    pool.give_back(p1);   // we're done with p1
    auto *p2 = pool.create();
    // p2 might or might not point at what was previously p1
}
Note that the value 100 only ever appears in the reset() function.
The reason for making constructors private and Pool a friend is to prevent accidental creation of new objects (i.e. to force use of the pool).
Optionally, making Particle::reset() public allows main() to call p1->reset(), but that is not required. Either way, all objects obtained from the pool (whether created fresh or reused) will have been reset.
I'd probably also make use of std::unique_ptr<Particle> so the lifetime of objects is properly managed, for example, if you forget to give the object back to the pool. I'll leave implementing that sort of thing as an exercise.
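A brief sketch of what that exercise might look like, assuming the Pool::give_back interface above (GiveBackDeleter and pooled are names introduced here for illustration, not the answer's own code):

// A deleter that returns the particle to the pool instead of deleting it.
struct GiveBackDeleter {
    Pool* pool;
    void operator()(Particle* p) const { pool->give_back(p); }
};

using pooled = std::unique_ptr<Particle, GiveBackDeleter>;

// Wrap the raw pointer from Pool::create() immediately:
//     pooled p1(pool.create(), GiveBackDeleter{&pool});
// Now forgetting to call give_back() is no longer possible.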

Hint compiler to return a reference making 'auto' behave

(possibly related to How to implement a C++ method that creates a new object, and returns a reference to it, which is about something different, but incidentally contains almost exactly the same code)
I would like to return a reference to a static local from a static function. I can get it to work, of course, but it's less pretty than I'd like.
Can this be improved?
The background
I have a couple of classes which don't do much except acquire or initialize a resource reliably and in a well-defined manner, and then release it. They don't even need to know an awful lot about the resource themselves, but the user might still want to query some info in some way.
That's of course trivially done:
struct foo { foo() { /* acquire */ } ~foo(){ /* release */ } };
int main()
{
    foo foo_object;
    // do stuff
}
Trivial. Alternatively, this would work fine as well:
#include <scopeguard.h>

int main()
{
    auto g = make_guard([](){ /* blah */ }, [](){ /* un-blah */ });
}
Except now, querying stuff is a bit harder, and it's less pretty than I like. If you prefer Stroustrup rather than Alexandrescu, you can include GSL instead and use some concoction involving final_act. Whatever.
Ideally, I would like to write something like:
int main()
{
    auto blah = foo::init();
}
Where you get back a reference to an object which you can query if you wish to do that. Or ignore it, or whatever. My immediate thought was: Easy, that's just Meyers' singleton in disguise. Thus:
struct foo
{
    //...
    static bar& init() { static bar b; return b; }
};
That's it! Dead simple, and perfect. The foo is created when you call init, you get back a bar that you can query for stats, and it's a reference so you are not the owner, and the foo automatically cleans up at the end.
Except...
The issue
Of course it couldn't be so easy, and anyone who has ever used range-based for with auto knows that you have to write auto& if you don't want surprise copies. But alas, auto alone looked so perfectly innocent that I didn't think of it. Also, I'm explicitly returning a reference, so what can auto possibly capture but a reference!
Result: A copy is made (from what? presumably from the returned reference?) which of course has a scoped lifetime. Default copy constructor is invoked (harmless, does nothing), eventually the copy goes out of scope, and contexts are released mid-operation, stuff stops working. At program end, the destructor is called again. Kaboooom. Huh, how did that happen.
The obvious (well, not so obvious in the first second!) solution is to write:
auto& blah = foo::init();
This works, and works fine. Problem solved, except... except it's not pretty and people might accidentally just do it wrong like I did. Can't we do without needing an extra ampersand?
It would probably also work to return a shared_ptr, but that would involve needless dynamic memory allocation and what's worse, it would be "wrong" in my perception. You don't share ownership, you are merely allowed to look at something that someone else owns. A raw pointer? Correct for semantics, but... ugh.
By deleting the copy constructor, I can prevent innocent users from running into the forgot-& trap (this will then cause a compiler error).
That is however still less pretty than I would like. There must be a way of communicating "This return value is to be taken as reference" to the compiler? Something like return std::as_reference(b);?
I had thought about some con trick involving "moving" the object without really moving it, but not only will the compiler almost certainly not let you move a static local at all, but if you manage to do it, you have either changed ownership, or with a "fake move" move-constructor again call the destructor twice. So that's no solution.
Is there a better, prettier way, or do I just have to live with writing auto&?
Something like return std::as_reference(b);?
You mean like std::ref? This returns a std::reference_wrapper<T> of the value you provide.
static std::reference_wrapper<bar> init() { static bar b; return std::ref(b); }
Of course, auto will deduce the returned type to reference_wrapper<T> rather than T&. And while reference_wrapper<T> has an implicit operator T&, that doesn't mean the user can use it exactly like a reference. To access members, they have to go through .get() or bind a plain T& first; reference_wrapper has no operator->.
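For illustration, a small sketch of those ergonomics (bar and its member use() are placeholders):

#include <functional>

struct bar { void use() {} };

struct foo {
    static std::reference_wrapper<bar> init() { static bar b; return std::ref(b); }
};

int main() {
    auto r = foo::init();   // deduces std::reference_wrapper<bar>, not bar&
    r.get().use();          // member access goes through .get()
    bar& b = r;             // or bind a plain reference via the implicit conversion
    b.use();
}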
That all being said however, I believe your thinking is wrong-headed. The reason is that auto and auto& are something that every C++ programmer needs to learn how to deal with. People aren't going to make their iterator types return reference_wrappers instead of T&. People don't generally use reference_wrapper in that way at all.
So even if you wrap all of your interfaces like this, the user still has to eventually know when to use auto&. So really, the user hasn't gained any safety, outside of your particular APIs.
Forcing the user to capture by reference is a three-step process.
First, make the returned thing non-copyable:
struct bar {
    bar() = default;
    bar(bar const&) = delete;
    bar& operator=(bar const&) = delete;
};
then create a little passthrough function that delivers references reliably:
namespace notstd
{
    template<class T>
    decltype(auto) as_reference(T& t) { return t; }
}
Then write your static init() function, returning decltype(auto):
static decltype(auto) init()
{
    static bar b;
    return notstd::as_reference(b);
}
Full demo:
namespace notstd
{
    template<class T>
    decltype(auto) as_reference(T& t) { return t; }
}

struct bar {
    bar() = default;
    bar(bar const&) = delete;
    bar& operator=(bar const&) = delete;
};

struct foo
{
    //...
    static decltype(auto) init()
    {
        static bar b;
        return notstd::as_reference(b);
    }
};

int main()
{
    auto& b = foo::init();

    // won't compile == safe
    // auto b2 = foo::init();
}
Skypjack correctly noted that init() could be written just as well without notstd::as_reference():
static decltype(auto) init()
{
    static bar b;
    return (b);
}
The parentheses in return (b) make decltype(auto) deduce bar& (a parenthesized id-expression is treated as an lvalue expression), so the function returns a reference.
My problem with this approach is that C++ developers are often surprised to learn this, so it could easily be missed by a less experienced code maintainer.
My feeling is that return notstd::as_reference(b); explicitly expresses intent to code maintainers, much as std::move() does.
The best, most idiomatic, readable, unsurprising thing to do would be to =delete the copy constructor and copy assignment operator and just return a reference like everybody else.
But, seeing as you brought up smart pointers...
It would probably also work to return a shared_ptr, but that would involve needless dynamic memory allocation and what's worse, it would be "wrong" in my perception. You don't share ownership, you are merely allowed to look at something that someone else owns. A raw pointer? Correct for semantics, but... ugh.
A raw pointer would be perfectly acceptable here. If you don't like that, you have a number of options following the "pointers" train of thought.
You could use a shared_ptr without dynamic memory, with a custom deleter:
struct foo {
    static shared_ptr<bar> init() { static bar b; return { &b, [](bar*) noexcept {} }; }
};
Although the caller doesn't "share" ownership, it's not clear what ownership even means when the deleter is a no-op.
You could use a weak_ptr holding a reference to the object managed by the shared_ptr:
struct foo {
    static weak_ptr<bar> init() {
        static bar b;
        // keep a static shared_ptr with a no-op deleter so the weak_ptr has something to refer to
        static shared_ptr<bar> owner { &b, [](bar*) noexcept {} };
        return owner;
    }
};
But considering the shared_ptr's deleter is a no-op, this isn't really any different from the previous example; it just imposes an unnecessary call to .lock() on the user.
You could use a unique_ptr without dynamic memory, with a custom deleter:
struct noop_deleter { void operator()(void*) const noexcept {} };
template <typename T> using singleton_ptr = std::unique_ptr<T, noop_deleter>;

struct foo {
    static singleton_ptr<bar> init() { static bar b; return { &b, {} }; }
};
This has the benefit of not needing to manage a meaningless reference count, but again the semantic meaning is not a perfect fit: the caller does not assume unique ownership, whatever ownership really means.
In library fundamentals TS v2 you can use observer_ptr, which is just a raw pointer that expresses the intent to be non-owning:
struct foo {
    static auto init() { static bar b; return experimental::observer_ptr{&b}; }
};
If you don't like any of these options, you can of course define your own smart pointer type.
In a future standard you may be able to define a "smart reference" that works like reference_wrapper without the .get(), by utilising the proposed overloaded operator dot (operator.).
If you want to use a singleton, use it correctly:
class Singleton
{
public:
    static Singleton& getInstance() {
        static Singleton instance;
        return instance;
    }

    Singleton(const Singleton&) = delete;
    Singleton& operator =(const Singleton&) = delete;

private:
    Singleton() { /*acquire*/ }
    ~Singleton() { /*Release*/ }
};
So you cannot create a copy:
auto instance = Singleton::getInstance(); // Fail
whereas you may use a reference:
auto& instance = Singleton::getInstance(); // ok.
But if you want scoped RAII instead of a singleton, you may do:
struct Foo {
    Foo() { std::cout << "acquire\n"; }
    ~Foo(){ std::cout << "release\n"; }

    Foo(const Foo&) = delete;
    Foo& operator =(const Foo&) = delete;

    static Foo init() { return {}; }
};
With the usage
auto&& foo = Foo::init(); // ok.
And copying is still forbidden:
auto foo = Foo::init(); // Fail.
Someone will make a typo one day, and we may or may not notice it among so much code, even though we all know we should use auto& instead of auto.
The most convenient but very dangerous solution is using a derived class, as it breaks the strict-aliasing rule.
struct foo
{
private:
    // Make these ref wrappers private so that the user can't use them directly.
    template<class T>
    class Pref : public T {
    private:
        // Make the copy/move operations private so that we can issue a compiler
        // error when someone typos an auto.
        Pref(const Pref&) = default;
        Pref(Pref&&) = default;
        Pref& operator=(const Pref&) = default;
        Pref& operator=(Pref&&) = default;
    };

public:
    static Pref<bar>& init() {
        static bar b;
        return static_cast<Pref<bar>&>(b);
    }
    ///....use Pref<bar>& as well as bar&.
};

Could shared_from_this be implemented without enable_shared_from_this?

There seem to be some edge cases when using enable_shared_from_this. For example:
boost shared_from_this and multiple inheritance
Could shared_from_this be implemented without using enable_shared_from_this? If so, could it be made as fast?
A shared_ptr is 3 things. It is a reference counter, a destroyer and an owned resource.
When you make_shared, it allocates all 3 at once, then constructs them in that one block.
When you create a shared_ptr<T> from a T*, you create the reference counter/destroyer separately, and note that the owned resource is the T*.
The goal of shared_from_this is basically to let us extract a shared_ptr<T> from a T* (under the assumption that one exists).
If all shared pointers were created via make_shared, this would be easy (unless you want defined behavior on failure), as the layout is known.
However, not all shared pointers are created that way. Sometimes you can create a shared pointer to an object that was not created by any std library function, and hence the T* is unrelated to the shared pointer reference counting and destruction data.
As there is no room in a T* or what it points to (in general) to find such constructs, we would have to store it externally, which means global state and thread safety overhead and other pain. This would be a burden on people who do not need shared_from_this, and a performance hit compared to the current state for people who do need it (the mutex, the lookup, etc).
The current design stores a weak_ptr<T> in the enable_shared_from_this<T>. This weak_ptr is initialized whenever make_shared or shared_ptr<T> ctor is called. Now we can create a shared_ptr<T> from the T* because we have "made room" for it in the class by inheriting from enable_shared_from_this<T>.
This is again extremely low cost, and handles the simple cases very well. We end up with an overhead of one weak_ptr<T> over the baseline cost of a T.
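As an illustration of the mechanism, here is a deliberately simplified sketch of my own (not the real std:: implementation, which hooks into the shared_ptr constructors and also handles const, arrays and aliasing):

#include <memory>
#include <utility>

template<class U, class... Args>
std::shared_ptr<U> my_make_shared(Args&&... args);

// A stripped-down stand-in for std::enable_shared_from_this.
template<class T>
struct my_enable_shared_from_this {
    std::shared_ptr<T> shared_from_this() {
        return std::shared_ptr<T>(weak_);    // throws std::bad_weak_ptr if not owned by a shared_ptr
    }
private:
    template<class U, class... Args>
    friend std::shared_ptr<U> my_make_shared(Args&&... args);

    std::weak_ptr<T> weak_;                  // the one extra weak_ptr of overhead mentioned above
};

// Stand-in for the part of make_shared / the shared_ptr constructors that
// initializes the embedded weak_ptr when the base class is detected.
template<class U, class... Args>
std::shared_ptr<U> my_make_shared(Args&&... args) {
    auto sp = std::make_shared<U>(std::forward<Args>(args)...);
    sp->weak_ = sp;                          // the "made room" is used: remember the control block
    return sp;
}

struct Widget : my_enable_shared_from_this<Widget> {};

int main() {
    auto w = my_make_shared<Widget>();
    auto w2 = w->shared_from_this();         // same control block, same reference count
    return w.use_count() == 2 ? 0 : 1;
}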
When you have two different enable_shared_from_this bases, their weak_ptr<A> and weak_ptr<B> members are unrelated, so it is ambiguous where the resulting smart pointer should be stored (probably in both?). This ambiguity results in the error you see, as the machinery assumes there is exactly one weak_ptr<?> member in one unique enable_shared_from_this<?> base, and there are actually two.
The linked solution provides a clever way to extend this. It writes enable_shared_from_this_virtual<T>.
Here, instead of storing a weak_ptr<T>, we store a weak_ptr<Q>, where Q is a virtual base class of enable_shared_from_this_virtual<T>, and we do so uniquely in that virtual base. It then non-virtually overrides shared_from_this and similar methods to provide the same interface as enable_shared_from_this<T>, using the aliasing shared_ptr constructor (the one that takes an existing shared_ptr plus a pointer to a member or derived object), which splits the reference count/destroyer component from the owned resource component in a type-safe way.
The overhead here is greater than with the basic enable_shared_from_this: there is virtual inheritance and a forced virtual destructor, which means the object stores a pointer to a virtual function table, and access to shared_from_this is slower as it requires a virtual-table lookup.
The advantage is that it "just works". There is now one unique enable_shared_from_this-style base in the hierarchy, and you can still get type-safe shared pointers to classes T that inherit from enable_shared_from_this_virtual<T>.
Yes, it could be done using a global hash table of type
unordered_map< T*, weak_ptr<T> >
to perform the lookup of a shared pointer from this.
#include <memory>
#include <iostream>
#include <unordered_map>
#include <cassert>

using namespace std;

template<class T>
struct MySharedFromThis {
    static unordered_map<T*, weak_ptr<T> > map;

    static std::shared_ptr<T> Find(T* p) {
        auto iter = map.find(p);
        if (iter == map.end())
            return nullptr;
        auto shared = iter->second.lock();
        if (shared == nullptr)
            throw bad_weak_ptr();
        return shared;
    }
};

template<class T>
unordered_map<T*, weak_ptr<T> > MySharedFromThis<T>::map;

template<class T>
struct MyDeleter {
    void operator()(T* p) {
        std::cout << "deleter called" << std::endl;
        auto& map = MySharedFromThis<T>::map;
        auto iter = map.find(p);
        assert(iter != map.end());
        map.erase(iter);
        delete p;
    }
};

template<class T>
shared_ptr<T> MyMakeShared() {
    auto p = shared_ptr<T>(new T, MyDeleter<T>());
    MySharedFromThis<T>::map[p.get()] = p;
    return p;
}

struct Test {
    shared_ptr<Test> GetShared() { return MySharedFromThis<Test>::Find(this); }
};

int main() {
    auto p = MyMakeShared<Test>();
    assert(p);
    assert(p->GetShared() == p);
}
However, the map has to be updated whenever a shared_ptr is constructed from a T*, and before the deleter is called, costing time. Also, to be thread safe, a mutex would have to guard access to the map, serializing allocations of the same type between threads. So this implementation would not perform as well as enable_shared_from_this.
Update:
Improving on this using the same pointer tricks used by make_shared, here is an implementation which should be just as fast as shared_from_this.
template<class T>
struct Holder {
    weak_ptr<T> weak;
    T value;
};

template<class T>
Holder<T>* GetHolder(T* p) {
    // Scary! Assumes 'value' sits exactly sizeof(weak_ptr<T>) bytes
    // after the start of Holder<T> (i.e. no padding in between).
    return reinterpret_cast< Holder<T>* >(reinterpret_cast<char*>(p) - sizeof(weak_ptr<T>));
}

template<class T>
struct MyDeleter
{
    void operator()(T* p)
    {
        delete GetHolder(p);
    }
};

template<class T>
shared_ptr<T> MyMakeShared() {
    auto holder = new Holder<T>;
    auto p = shared_ptr<T>(&(holder->value), MyDeleter<T>());
    holder->weak = p;
    return p;
}

template<class T>
shared_ptr<T> MySharedFromThis(T* self) {
    return GetHolder(self)->weak.lock();
}

What is the usefulness of `enable_shared_from_this`?

I ran across enable_shared_from_this while reading the Boost.Asio examples, and after reading the documentation I am still lost as to how it should correctly be used. Can someone please give me an example and an explanation of when using this class makes sense?
It enables you to get a valid shared_ptr instance to this, when all you have is this. Without it, you would have no way of getting a shared_ptr to this, unless you already had one as a member. This example is from the boost documentation for enable_shared_from_this:
class Y: public enable_shared_from_this<Y>
{
public:
    shared_ptr<Y> f()
    {
        return shared_from_this();
    }
};

int main()
{
    shared_ptr<Y> p(new Y);
    shared_ptr<Y> q = p->f();
    assert(p == q);
    assert(!(p < q || q < p)); // p and q must share ownership
}
The method f() returns a valid shared_ptr, even though it had no member instance. Note that you cannot simply do this:
class Y: public enable_shared_from_this<Y>
{
public:
    shared_ptr<Y> f()
    {
        return shared_ptr<Y>(this);
    }
};
The shared pointer that this returns will have a different reference count from the "proper" one, and one of them will end up holding a dangling reference when the object is deleted.
enable_shared_from_this has become part of the C++11 standard. You can get it from there as well as from boost.
From the Dr. Dobb's article on weak pointers, I think this example is easier to understand (source: http://drdobbs.com/cpp/184402026):
...code like this won't work correctly:
int *ip = new int;
shared_ptr<int> sp1(ip);
shared_ptr<int> sp2(ip);
Neither of the two shared_ptr objects knows about the other, so both will try to release the resource when they are destroyed. That usually leads to problems.
Similarly, if a member function needs a shared_ptr object that owns the object that it's being called on, it can't just create an object on the fly:
struct S
{
    shared_ptr<S> dangerous()
    {
        return shared_ptr<S>(this); // don't do this!
    }
};

int main()
{
    shared_ptr<S> sp1(new S);
    shared_ptr<S> sp2 = sp1->dangerous();
    return 0;
}
This code has the same problem as the earlier example, although in a more subtle form. When it is constructed, the shared_ptr object sp1 owns the newly allocated resource. The code inside the member function S::dangerous doesn't know about that shared_ptr object, so the shared_ptr object that it returns is distinct from sp1. Copying the new shared_ptr object to sp2 doesn't help; when sp2 goes out of scope, it will release the resource, and when sp1 goes out of scope, it will release the resource again.
The way to avoid this problem is to use the class template enable_shared_from_this. The template takes one template type argument, which is the name of the class that defines the managed resource. That class must, in turn, be derived publicly from the template; like this:
struct S : enable_shared_from_this<S>
{
    shared_ptr<S> not_dangerous()
    {
        return shared_from_this();
    }
};

int main()
{
    shared_ptr<S> sp1(new S);
    shared_ptr<S> sp2 = sp1->not_dangerous();
    return 0;
}
When you do this, keep in mind that the object on which you call shared_from_this must be owned by a shared_ptr object. This won't work:
int main()
{
    S *p = new S;
    shared_ptr<S> sp2 = p->not_dangerous(); // don't do this
}
Here's my explanation, from a nuts-and-bolts perspective (the top answer didn't 'click' with me). Note that this is the result of investigating the source for shared_ptr and enable_shared_from_this that comes with Visual Studio 2012. Perhaps other compilers implement enable_shared_from_this differently...
enable_shared_from_this<T> adds a private weak_ptr<T> member to T which refers to the 'one true' control block (and hence reference count) for the instance of T.
So, when you first create a shared_ptr<T> from a new T*, that T*'s internal weak_ptr gets initialized from it, so both refer to the same control block with a refcount of 1.
T can then, in its methods, call shared_from_this to obtain an instance of shared_ptr<T> that backs onto that same internally stored control block. This way, you always have one place where T*'s ref-count is stored, rather than having multiple shared_ptr instances that don't know about each other, each thinking it is the shared_ptr in charge of ref-counting T and deleting it when its ref-count reaches zero.
There is one particular case where I find enable_shared_from_this extremely useful: thread safety when using asynchronous callbacks.
Imagine class Client has a member of type AsynchronousPeriodicTimer:
struct AsynchronousPeriodicTimer
{
    // invokes the callback periodically on some thread...
    void SetCallback(std::function<void(void)> callback);
    void ClearCallback(); // clears the callback
};
struct Client
{
    Client(std::shared_ptr<AsynchronousPeriodicTimer> timer)
        : _timer(timer)
    {
        _timer->SetCallback(
            [this]
            ()
            {
                assert(this); // what if 'this' is already dead because ~Client() has been called?
                std::cout << ++_counter << '\n';
            }
        );
    }
    ~Client()
    {
        // clearing the callback is not in sync with the timer, and can actually occur while the callback code is running
        _timer->ClearCallback();
    }
    int _counter = 0;
    std::shared_ptr<AsynchronousPeriodicTimer> _timer;
};
int main()
{
    auto timer = std::make_shared<AsynchronousPeriodicTimer>();
    {
        auto client = std::make_shared<Client>(timer);
        // .. some code
        // client dies here, there is a race between the client callback and the client destructor
    }
}
The client class subscribes a callback function to the periodic timer. Once the client object goes out of scope, there is a race condition between the client's callback and the client's destructor. The callback may be invoked with a dangling pointer!
The solution: using enable_shared_from_this to extend the object lifetime for the duration of the callback invocation.
struct Client : std::enable_shared_from_this<Client>
{
    Client(std::shared_ptr<AsynchronousPeriodicTimer> timer)
        : _timer(timer)
    {
    }
    void Init()
    {
        auto captured_self = weak_from_this(); // weak_ptr, to avoid cyclic references with shared_ptr
        _timer->SetCallback(
            [captured_self]
            ()
            {
                if (auto self = captured_self.lock())
                {
                    // 'this' is guaranteed to be non-null: we managed to promote captured_self to a shared_ptr
                    std::cout << ++self->_counter << '\n';
                }
            }
        );
    }
    ~Client()
    {
        // the destructor cannot be called while the callback is running; shared_ptr guarantees this
        _timer->ClearCallback();
    }
    int _counter = 0;
    std::shared_ptr<AsynchronousPeriodicTimer> _timer;
};
The mechanism of enable_shared_from_this, combined with the inherent thread safety of std::shared_ptr reference counting, enables us to ensure that the Client object cannot be destructed while the callback code is accessing its internal members.
Note that the Init method is separated from the constructor since the initialization process of enable_shared_from_this is not finalized until the constructor exits. Hence the extra method. It is generally unsafe to subscribe an asynchronous callback from within a constructor since the callback may access uninitialized fields.
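The corresponding usage is then two-phase (a small sketch; the call sites mirror the earlier main):

int main()
{
    auto timer = std::make_shared<AsynchronousPeriodicTimer>();
    {
        auto client = std::make_shared<Client>(timer); // shared_from_this is usable only after this line
        client->Init();                                // safe: the control block now exists
        // .. some code
    } // ~Client() runs here, or once the last in-flight callback releases its shared_ptr
}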
Note that using a boost::intrusive_ptr does not suffer from this problem, since the reference count lives inside the object itself and an intrusive_ptr can always be constructed from a plain this.
This is often a more convenient way to get around the issue.
It's exactly the same in C++11 and later: it exists to enable the ability to return this as a shared pointer, since this gives you a raw pointer.
In other words, it allows you to turn code like this:
class Node {
public:
    Node* getParent() {
        if (m_parent) {
            return m_parent;
        } else {
            return this;
        }
    }

private:
    Node* m_parent = nullptr;
};
into this:
class Node : public std::enable_shared_from_this<Node> { // inheritance must be public for shared_from_this to work
public:
    std::shared_ptr<Node> getParent() {
        std::shared_ptr<Node> parent = m_parent.lock();
        if (parent) {
            return parent;
        } else {
            return shared_from_this();
        }
    }

private:
    std::weak_ptr<Node> m_parent;
};