How to safely pass objects created by new into constructor - c++

I have a few classes which look like that:
struct equation {};
struct number: equation {
number(int n): value(n) {}
private:
int value;
};
struct operation: equation {
operation(equation* left, equation* right)
: left(left), right(right) {}
private:
std::unique_ptr<equation> left, right;
};
They are designed so that operation takes ownership of the pointers passed into its constructor.
My question is: how can I modify this class so that it can safely be used in the following manner:
operation op(new number(123), new number(456));
It seems to me that if the first object is created and the second one is not (say an exception is thrown from the number constructor), then there is a memory leak - nobody will ever delete the pointer to the first number.
What can I do about this situation? I do not want to allocate objects sequentially and delete them if something has failed - it's too verbose.

I do not want to allocate objects sequentially and delete them if something has failed - it's too verbose.
Yes. You just need to apply the smart pointer idiom more thoroughly; more precisely, change the parameter type to std::unique_ptr, and use std::make_unique (since C++14) instead of using new explicitly to avoid this problem. e.g.
struct operation: equation {
operation(std::unique_ptr<equation> left, std::unique_ptr<equation> right)
: left(std::move(left)), right(std::move(right)) {}
private:
std::unique_ptr<equation> left, right;
};
then
operation op(std::make_unique<number>(123), std::make_unique<number>(456));
Note that the use of std::make_unique is important here: the raw pointer created inside std::make_unique is guaranteed to be managed by the returned std::unique_ptr. Even if the 2nd std::make_unique fails, the std::unique_ptr created by the 1st std::make_unique will see to it that the pointer it owns is destroyed. And this is also true for the case where the 2nd std::make_unique is invoked first.
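To see the contrast, wrapping new at the call site is not an equivalent fix before C++17, because the compiler was allowed to interleave the evaluation of the two arguments; a sketch of the risky call:
// Risky before C++17: both new-expressions may be evaluated first, and if the
// second one throws, the first allocation is never wrapped and therefore leaks.
operation op(std::unique_ptr<equation>(new number(123)),
             std::unique_ptr<equation>(new number(456)));
// std::make_unique closes this window: each allocation is wrapped inside the
// call itself, so no raw pointer is ever left unowned.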
Before C++14 you can make your own version of std::make_unique; a basic one is easy to write. Here's a possible implementation.
// note: this implementation does not disable this overload for array types
template<typename T, typename... Args>
std::unique_ptr<T> make_unique(Args&&... args)
{
return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}

Related

how do I tell an array pointer from a regular pointer

I'm writing a smart pointer like std::shared_ptr, since my compiler does not support C++17 and later versions, and I want to support array pointers like:
myptr<char []>(new char[10]);
well, it actually went well, until I hit the same problem the old-version std::shared_ptr had:
myptr<char []>(new char);
yeah, it can't tell whether it's a regular pointer or an array pointer, and since my deleter is kind of like:
deleter = [](T *p) {delete[] p;}
which means it just meets the same problem that the old-version std::shared_ptr has.
my array-pointer partial specialization is like:
template <typename T, typename DeleterType>
class myptr<T[], DeleterType> { // DeleterType has a default param in main specialization
// as std::function<void(T*)>
private:
my_ptr_cnt<T, DeleterType> *p; // this class maintains the actual pointer and the reference count
public:
// constructor
/// \bug here:
myptr(T *p, DeleterType deleter=[](T *p) {delete []p;}) :
p(new my_ptr_cnt<T, DeleterType>(p, deleter)) { }
};
You can't. This is one of the many reasons that raw arrays are bad.
What you can do is forbid construction from a raw pointer, and rely on make_shared-like construction.
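A minimal sketch of that idea (this is not the question's full class, just the shape of it; the factory names are made up, and the array/non-array choice is recorded at construction instead of in the type):
#include <cstddef>
#include <utility>

template <typename T>
class myptr {
    T*   p_ = nullptr;
    bool is_array_ = false;
    myptr(T* p, bool is_array) : p_(p), is_array_(is_array) {}   // private: no construction from a raw pointer
public:
    template <typename... Args>
    static myptr make(Args&&... args) { return myptr(new T(std::forward<Args>(args)...), false); }
    static myptr make_array(std::size_t n) { return myptr(new T[n](), true); }
    myptr(myptr&& o) noexcept : p_(o.p_), is_array_(o.is_array_) { o.p_ = nullptr; }
    myptr(const myptr&) = delete;
    myptr& operator=(const myptr&) = delete;
    ~myptr() { if (is_array_) delete[] p_; else delete p_; }
};

// usage:
//   auto s = myptr<char>::make('x');        // single object, destroyed with delete
//   auto a = myptr<char>::make_array(10);   // array, destroyed with delete[]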

switching to a different custom allocator -> propagate to member fields

I profiled my program, and found that changing from standard allocator to a custom one-frame allocator can remove my biggest bottleneck.
Here is a dummy snippet (coliru link):-
class Allocator{ //can be stack/heap/one-frame allocator
//some complex field and algorithm
//e.g. virtual void* allocate(int amountByte,int align)=0;
//e.g. virtual void deallocate(void* v)=0;
};
template<class T> class MyArray{
//some complex field
Allocator* allo=nullptr;
public: MyArray( Allocator* a){
setAllocator(a);
}
public: void setAllocator( Allocator* a){
allo=a;
}
public: void add(const T& t){
//store "t" in some array
}
//... other functions
};
However, my one-frame allocator has a drawback: the user must make sure that every object allocated by the one-frame allocator is deleted/released at the end of the time-step.
Problem
Here is an example of use-case.
I use the one-frame allocator to store temporary result of M3 (overlapping surface from collision detection; wiki link) in Physics Engine.
Here is a snippet.
M1, M2 and M3 are all manifolds, but at different levels of detail :-
Allocator oneFrameAllocator;
Allocator heapAllocator;
class M1{}; //e.g. a single-point collision site
class M2{ //e.g. analysed many-point collision site
public: MyArray<M1> m1s{&oneFrameAllocator};
};
class M3{ //e.g. analysed collision surface
public: MyArray<M2> m2s{&oneFrameAllocator};
};
Notice that I set the default allocator to be oneFrameAllocator (because it saves CPU).
Because I create instances of M1, M2 and M3 only as temporary variables, this works.
Now, I want to cache a new instance of M3, output_m3=m3;, for the next timeStep.
(^ To check whether a collision has just started or just ended)
In other words, I want to copy the one-frame-allocated m3 to the heap-allocated output_m3 at #3 (shown below).
Here is the game-loop :-
int main(){
M3 output_m3; //must use "heapAllocator"
for(int timeStep=0;timeStep<100;timeStep++){
//v start complex computation #2
M3 m3;
M2 m2;
M1 m1;
m2.m1s.add(m1);
m3.m2s.add(m2);
//^ end complex computation
//output_m3=m3; (change allocator, how? #3)
//.... clean up oneFrameAllocator here ....
}
}
I can't assign output_m3=m3 directly, because output_m3 would then copy the use of the one-frame allocator from m3.
My poor solution is to create output_m3 from the bottom up.
The code below works, but it is very tedious.
M3 reconstructM3(M3& src,Allocator* allo){
//very ugly here #1
M3 m3New;
m3New.m2s.setAllocator(allo);
for(int n=0;n<src.m2s.size();n++){
M2 m2New;
m2New.m1s.setAllocator(allo);
for(int k=0;k<src.m2s[n].m1s.size();k++){
m2New.m1s.add(src.m2s[n].m1s[k]);
}
m3New.m2s.add(m2New);
}
return m3New;
}
output_m3=reconstructM3(m3,&heapAllocator);
Question
How can I switch the allocator of an object elegantly (without propagating everything by hand)?
Bounty Description
The answer doesn't need to be based on any of my snippets or anything physics-related. My code may be beyond repair.
IMHO, passing type-of-allocator as a class template parameter (e.g. MyArray<T,StackAllocator> ) is undesirable.
I don't mind vtable-cost of Allocator::allocate() and Allocator::deallocate().
I dream of a C++ pattern/tool that can propagate the allocator to the members of a class automatically. Perhaps it is operator=(), as MSalters advised, but I can't find a proper way to achieve it.
Reference: After receiving an answer from JaMiT, I found that this question is similar to Using custom allocator for AllocatorAwareContainer data members of a class.
Justification
At its core, this question is asking for a way to use a custom allocator with a multi-level container. There are other stipulations, but after thinking about this, I've decided to ignore some of those stipulations. They seem to be getting in the way of solutions without a good reason. That leaves open the possibility of an answer from the standard library: std::scoped_allocator_adaptor and std::vector.
Perhaps the biggest change with this approach is tossing the idea that a container's allocator needs to be modifiable after construction (toss the setAllocator member). That idea seems questionable in general and incorrect in this specific case. Look at the criteria for deciding which allocator to use:
One-frame allocation requires the object be destroyed by the end of the loop over timeStep.
Heap allocation should be used when one-frame allocation cannot.
That is, you can tell which allocation strategy to use by looking at the scope of the object/variable in question. (Is it inside or outside the loop body?) Scope is known at construction time and does not change (as long as you don't abuse std::move). So the desired allocator is known at construction time and does not change. However, the current constructors do not permit specifying an allocator. That is something to change. Fortunately, such a change is a fairly natural extension of introducing scoped_allocator_adaptor.
The other big change is tossing the MyArray class. Standard containers exist to make your programming easier. Compared to writing your own version, the standard containers are faster to implement (as in, already done) and less prone to error (the standard strives for a higher bar of quality than "works for me this time"). So out with the MyArray template and in with std::vector.
How to do it
The code snippets in this section can be joined into a single source file that compiles. Just skip over my commentary between them. (This is why only the first snippet includes headers.)
Your current Allocator class is a reasonable starting point. It just needs a pair of methods that indicate when two instances are interchangeable (i.e. when both are able to deallocate memory that was allocated by either of them). I also took the liberty of changing amountByte to an unsigned type, since allocating a negative amount of memory does not make sense. (I left the type of align alone though, since there is no indication of what values this would take. Possibly it should be unsigned or an enumeration.)
#include <cstdlib>
#include <functional>
#include <scoped_allocator>
#include <vector>
class Allocator {
public:
virtual void * allocate(std::size_t amountByte, int align)=0;
virtual void deallocate(void * v)=0;
//some complex field and algorithm
// **** Addition ****
// Two objects are considered equal when they are interchangeable at deallocation time.
// There might be a more refined way to define this relation, but without the internals
// of Allocator, I'll go with simply being the same object.
bool operator== (const Allocator & other) const { return this == &other; }
bool operator!= (const Allocator & other) const { return this != &other; }
};
Next up are the two specializations. Their details are outside the scope of the question, though. So I'll just mock up something that will compile (needed since one cannot directly instantiate an abstract base class).
// Mock-up to allow defining the two allocators.
class DerivedAllocator : public Allocator {
public:
void * allocate(std::size_t amountByte, int) override { return std::malloc(amountByte); }
void deallocate(void * v) override { std::free(v); }
};
DerivedAllocator oneFrameAllocator;
DerivedAllocator heapAllocator;
Now we get into the first meaty chunk – adapting Allocator to the standard's expectations. This consists of a wrapper template whose parameter is the type of object being constructed. If you can parse the Allocator requirements, this step is simple. Admittedly, parsing the requirements is not simple since they are designed to cover "fancy pointers".
// Standard interface for the allocator
template <class T>
struct AllocatorOf {
// Some basic definitions:
//Allocator & alloc; // A plain reference is an option if you don't support swapping.
std::reference_wrapper<Allocator> alloc; // Or a pointer if you want to add null checks.
AllocatorOf(Allocator & a) : alloc(a) {} // Note: Implicit conversion allowed
// Maybe this value would come from a helper template? Tough to say, but as long as
// the value depends solely on T, the value can be a static class constant.
static constexpr int ALIGN = 0;
// The things required by the Allocator requirements:
using value_type = T;
// Rebind from other types:
template <class U>
AllocatorOf(const AllocatorOf<U> & other) : alloc(other.alloc) {}
// Pass through to Allocator:
T * allocate (std::size_t n) { return static_cast<T *>(alloc.get().allocate(n * sizeof(T), ALIGN)); }
void deallocate(T * ptr, std::size_t) { alloc.get().deallocate(ptr); }
// Support swapping (helps ease writing a constructor)
using propagate_on_container_swap = std::true_type;
};
// Also need the interchangeability test at this level.
template<class T, class U>
bool operator== (const AllocatorOf<T> & a_t, const AllocatorOf<U> & a_u)
{ return a_t.alloc.get() == a_u.alloc.get(); }
template<class T, class U>
bool operator!= (const AllocatorOf<T> & a_t, const AllocatorOf<U> & a_u)
{ return a_t.alloc.get() != a_u.alloc.get(); }
Next up are the manifold classes. The lowest level (M1) does not need any changes.
The mid-levels (M2) need two additions to get the desired results.
The member type allocator_type needs to be defined. Its existence indicates that the class is allocator-aware.
There needs to be a constructor that takes, as parameters, an object to copy and an allocator to use. This makes the class actually allocator-aware. (Potentially other constructors with an allocator parameter would be required, depending on what you actually do with these classes. The scoped_allocator works by automatically appending the allocator to the provided construction parameters. Since the sample code makes copies inside the vectors, a "copy-plus-allocator" constructor is needed.)
In addition, for general use, the mid-levels should get a constructor whose lone parameter is an allocator. For readability, I'll also bring back the MyArray name (but not the template).
The highest level (M3) just needs the constructor taking an allocator. Still, the two type aliases are useful for readability and consistency, so I'll throw them in as well.
class M1{}; //e.g. a single-point collision site
class M2{ //e.g. analysed many-point collision site
public:
using allocator_type = std::scoped_allocator_adaptor<AllocatorOf<M1>>;
using MyArray = std::vector<M1, allocator_type>;
// Default construction still uses oneFrameAllocator, but this can be overridden.
explicit M2(const allocator_type & alloc = oneFrameAllocator) : m1s(alloc) {}
// "Copy" constructor used via scoped_allocator_adaptor
//M2(const M2 & other, const allocator_type & alloc) : m1s(other.m1s, alloc) {}
// You may want to instead delegate to the true copy constructor. This means that
// the m1s array will be copied twice (unless the compiler is able to optimize
// away the first copy). So this would need to be performance tested.
M2(const M2 & other, const allocator_type & alloc) : M2(other)
{
MyArray realloc{other.m1s, alloc};
m1s.swap(realloc); // This is where we need swap support.
}
MyArray m1s;
};
class M3{ //e.g. analysed collision surface
public:
using allocator_type = std::scoped_allocator_adaptor<AllocatorOf<M2>>;
using MyArray = std::vector<M2, allocator_type>;
// Default construction still uses oneFrameAllocator, but this can be overridden.
explicit M3(const allocator_type & alloc = oneFrameAllocator) : m2s(alloc) {}
MyArray m2s;
};
Let's see... two lines added to Allocator (could be reduced to just one), four-ish to M2, three to M3, eliminate the MyArray template, and add the AllocatorOf template. That's not a huge difference. Well, a little more than that count if you want to leverage the auto-generated copy constructor for M2 (but with the benefit of fully supporting the swapping of vectors). Overall, not that drastic a change.
Here is how the code would be used:
int main()
{
M3 output_m3{heapAllocator};
for ( int timeStep = 0; timeStep < 100; timeStep++ ) {
//v start complex computation #2
M3 m3;
M2 m2;
M1 m1;
m2.m1s.push_back(m1); // <-- vector uses push_back() instead of add()
m3.m2s.push_back(m2); // <-- vector uses push_back() instead of add()
//^ end complex computation
output_m3 = m3; // change to heap allocation
//.... clean up oneFrameAllocator here ....
}
}
The assignment seen here preserves the allocation strategy of output_m3 because AllocatorOf does not say to do otherwise. This seems to be the desired behavior, unlike the old way of copying the allocation strategy. Note that if both sides of an assignment already use the same allocation strategy, it doesn't matter whether the strategy is preserved or copied. Hence, existing behavior is preserved with no need for further changes.
Aside from specifying that one variable uses heap allocation, use of the classes is no messier than it was before. Since it was assumed that at some point there would be a need to specify heap allocation, I don't see why this would be objectionable. Use the standard library – it's there to help.
Since you're aiming at performance, I assume that your classes would not manage the lifetime of the allocator itself, and would simply keep a raw pointer or reference to it. Also, since you're changing storage, copying is inevitable. In this case, all you need is to add a "parametrized copy constructor" to each class, e.g.:
template <typename T> class MyArray {
private:
Allocator& _allocator;
public:
MyArray(Allocator& allocator) : _allocator(allocator) { }
MyArray(const MyArray& other, Allocator& allocator) : MyArray(allocator) {
// copy items from "other", passing new allocator to their parametrized copy constructors
}
};
class M1 {
public:
M1(Allocator& allocator) { }
M1(const M1& other, Allocator& allocator) { }
};
class M2 {
public:
MyArray<M1> m1s;
public:
M2(Allocator& allocator) : m1s(allocator) { }
M2(const M2& other, Allocator& allocator) : m1s(other.m1s, allocator) { }
};
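M3 would follow the same pattern, e.g. (a sketch mirroring the classes above):
class M3 {
public:
MyArray<M2> m2s;
public:
M3(Allocator& allocator) : m2s(allocator) { }
M3(const M3& other, Allocator& allocator) : m2s(other.m2s, allocator) { }
};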
This way you can simply do:
M3 stackM3(stackAllocator);
// do processing
M3 heapM3(stackM3, heapAllocator); // or return M3(stackM3, heapAllocator);
to create other-allocator-based copy.
Also, depending on your actual code structure, you can add some template magic to automate things:
template <typename T> class MX {
public:
MyArray<T> ms;
public:
MX(Allocator& allocator) : ms(allocator) { }
MX(const MX& other, Allocator& allocator) : ms(other.ms, allocator) { }
};
class M2 : public MX<M1> {
public:
using MX<M1>::MX; // inherit constructors
};
class M3 : public MX<M2> {
public:
using MX<M2>::MX; // inherit constructors
};
I realize this isn't the answer to your question, but if you only need the object for the next cycle (and not future cycles past that), can you just keep two one-frame allocators, destroying them on alternate cycles?
Since you are writing the allocator yourself, this could be handled directly in the allocator, where the clean-up function knows whether this is an even or odd cycle.
Your code would then look something like:
int main(){
M3 output_m3;
for(int timeStep=0;timeStep<100;timeStep++){
oneFrameAllocator.set_to_even(timeStep % 2 == 0);
//v start complex computation #2
M3 m3;
M2 m2;
M1 m1;
m2.m1s.add(m1);
m3.m2s.add(m2);
//^ end complex computation
output_m3=m3;
oneFrameAllocator.cleanup(timeStep % 2 == 1); //cleanup odd cycle
}
}
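A sketch of what the allocator side might look like; set_to_even and cleanup are hypothetical names, and I'm assuming the virtual allocate/deallocate interface sketched in the question's Allocator comments:
#include <cstdlib>
#include <vector>

class TwoFrameAllocator : public Allocator {          // Allocator as sketched in the question
    std::vector<void*> blocks[2];                     // allocations made during even/odd time-steps
    int current = 0;                                  // bucket that new allocations go into
public:
    void set_to_even(bool even) { current = even ? 0 : 1; }
    void* allocate(int amountByte, int /*align*/) override {
        void* p = std::malloc(amountByte);            // real code would respect 'align'
        blocks[current].push_back(p);
        return p;
    }
    void deallocate(void*) override {}                // individual frees are no-ops; memory is released in bulk
    void cleanup(bool odd) {                          // free everything held in the selected half
        std::vector<void*>& victims = blocks[odd ? 1 : 0];
        for (void* p : victims) std::free(p);
        victims.clear();
    }
};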

C++ std::move a pointer

I have a C++ framework which I provide to my users, who should use a templated wrapper I wrote with their own implementation as the templated type.
The wrapper acts as an RAII class and it holds a pointer to an implementation of the user's class.
To make the user's code clean and neat (in my opinion) I provide a cast operator which converts my wrapper to the pointer it holds. This way (along with some other overloads) the user can use my wrapper as if it is a pointer (much like a shared_ptr).
I came across a corner case where a user calls a function, which takes a pointer to his implementation class, using std::move on my wrapper. Here's an example of what it looks like:
#include <iostream>
using namespace std;
struct my_interface {
virtual int bar() = 0;
};
template <typename T>
struct my_base : public my_interface {
int bar() { return 4; }
};
struct my_impl : public my_base<int> {};
template <typename T>
struct my_wrapper {
my_wrapper(T* t) {
m_ptr = t;
}
operator T*() {
return m_ptr;
}
private:
T* m_ptr;
};
void foo(my_interface* a) {
std::cout << a->bar() << std::endl;
}
int main()
{
my_impl* impl = new my_impl();
my_wrapper<my_impl> wrapper(impl);
foo(std::move(wrapper));
//foo(wrapper);
return 0;
}
[This is of course just an example of the case, and there are more methods in the wrapper, but I'm pretty sure they don't play a role here.]
The user, as would I, expects that if std::move was called on the wrapper, then after the call to foo the wrapper will be empty (or at least modified as if it had been moved from), but in reality the only method invoked before foo is the cast operator.
Is there a way to distinguish between the two calls to foo, i.e. when calling with and without std::move?
EDIT
Thanks to The Mooing Duck's comment I found a way for my_wrapper to know which call is being made, but I'm really not sure this is the best method to go with and would appreciate comments on this as well:
Instead of the previous cast operator use the following two:
operator T*() & {
return m_ptr;
}
operator T*() &&{
//Do something
return m_ptr;
}
Now operator T*() && is called when calling with std::move, and operator T*() & is called when calling without it.
The user, as would I, expect that if std::move was called on the wrapper, then after the call to foo the wrapper will be empty (or at least modified as if it was moved)
Your expectation is wrong. It will only be modified if a move happens, i.e. if ownership of some kind of resource is transferred. But calling foo doesn't do anything like that, because it just gets access to the pointer held inside the wrapper. Calling std::move doesn't do anything except cast its argument to an rvalue, which doesn't alter it. Some function which accepts an rvalue by reference might modify it, so std::move enables that, but it doesn't do that itself. If you don't pass the rvalue to such a function then no modification takes place.
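A tiny illustration of the difference, using std::string:
#include <string>
#include <utility>

int main() {
    std::string s = "hello";
    std::move(s);                  // a no-op here: nothing consumes the rvalue, s is unchanged
    std::string t = std::move(s);  // only now may s be left empty, by string's move constructor
}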
If you really want to make it empty you can add an overload to do that:
template<typename T>
void foo(my_wrapper<T>&& w) {
foo(static_cast<my_interface*>(w));
w = my_wrapper<T>{nullptr}; // leave it empty (assumes the wrapper may hold a null pointer)
}
But ... why? Why should it do that?
The wrapper isn't left empty if you do:
my_wrapper<my_impl> w(new my_impl);
my_wrapper<my_impl> w2 = std::move(w);
And isn't left empty by:
my_wrapper<my_impl> w(new my_impl);
my_wrapper<my_impl> w2;
w2 = std::move(w);
If copying an rvalue wrapper doesn't leave it empty, why should simply accessing its member leave it empty? That makes no sense.
Even if your wrapper has a move constructor and move assignment operator so that the examples above do leave w empty, that still doesn't mean that accessing the member of an rvalue object should modify the object. Why does it make any logical difference whether the operator T* conversion is done to an lvalue or an rvalue?
(Also, are you really sure that having implicit conversions both to and from the wrapped pointer type is a good idea? Hint: it's not a good idea. In general prefer to make your conversions explicit, especially if you're dealing with pointers to dynamically-allocated objects.)
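For instance, a sketch of a more explicit interface for the wrapper (get() is a name borrowed from the standard smart pointers, not something from the original code):
template <typename T>
struct my_wrapper {
    my_wrapper(T* t) : m_ptr(t) {}
    explicit operator T*() { return m_ptr; }  // callers must now write static_cast<my_impl*>(wrapper)
    T* get() { return m_ptr; }                // or use a named, shared_ptr-style accessor
private:
    T* m_ptr;
};
// foo(wrapper.get());   // the access is visible at the call site, with or without std::move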

C++ force dynamic allocation with unique_ptr?

I've found out that unique_ptr can point to an already existing object.
For example, I can do this :
class Foo {
public:
Foo(int nb) : nb_(nb) {}
private:
int nb_;
};
int main() {
Foo f1(2);
Foo* ptr1(&f1);
unique_ptr<Foo> s_ptr1(&f1); // compiles, but s_ptr1 will delete a stack object when it goes out of scope
return 0;
}
My question is :
If I create a class with unique_ptr< Bar > as data members (where Bar is a class whose copy constructor was deleted) and a constructor that takes pointers as arguments, can I prevent the user from passing an already existing object/variable as an argument in that constructor (i.e. force him to use the new keyword)?
Because if he does, I won't be able to guarantee a valid state for my class objects (the user could still modify data members through their address from outside the class), and I can't copy the content of Bar to another memory area.
Example :
class Bar {
public:
Bar(/* arguments */) { /* data members allocation */ }
Bar(Bar const& b) = delete;
/* Other member functions */
private:
/* data members */
};
class Bar_Ptr {
public:
Bar_Ptr(Bar* ptr) {
if (ptr != nullptr) { ptr_ = unique_ptr<Bar> (ptr); }
} /* The user can still pass the address of an already existing Bar ... */
/* Other member functions */
private:
unique_ptr<Bar> ptr_;
};
You can't prevent programmers from doing stupid things. Both std::unique_ptr and std::shared_ptr can be created from an existing pointer. I've even seen cases where a custom deleter is passed in order to prevent deletion. (shared_ptr is more elegant for those cases.)
So if you have a pointer, you have to know the ownership of it. This is why I prefer to use std::unique_ptr, std::shared_ptr and std::weak_ptr for the 'owning' pointers, while the raw pointers represent non-owning pointers. If you propagate this to the location where the object is created, most static analyzers can tell you that you have made a mistake.
Therefore, I would rewrite the class Bar_ptr to something like:
class Bar_ptr {
public:
explicit Bar_ptr(std::unique_ptr<Bar> &&bar)
: ptr(std::move(bar)) {}
// ...
};
With this, the API of your class enforces the ownership transfer and it is up to the caller to provide a valid unique_ptr. In other words, you shouldn't worry about passing a pointer which isn't allocated.
No one prevents the caller from writing:
Bar bar{};
Bar_ptr barPtr{std::unique_ptr<Bar>{&bar}};
Though if you have a decent static analyzer, or even just a code review, I would expect this code to be rejected.
No, you can't. You can't stop people from doing stupid stuff. Declare a templated function that returns a new object based on the templated parameter.
I've seen something similar before.
The trick is that you create a function (let's call it make_unique) that takes the object (not a pointer; the object, so with an implicit constructor it can "take" the class constructor arguments), and this function creates and returns the unique_ptr. Something like this:
template <class T> std::unique_ptr<T> make_unique(T b);
By the way, you can recommend people to use this function, but no one will force them doing what you recommend...
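A possible definition of that helper, as a sketch; note that it copies or moves the argument into the new allocation, so it would not help with the question's Bar, whose copy constructor is deleted:
template <class T>
std::unique_ptr<T> make_unique(T b) {
    return std::unique_ptr<T>(new T(std::move(b)));
}
// usage: auto p = make_unique(Foo(2));   // Foo(2) is constructed, then moved into heap storage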
You cannot stop people from doing the wrong thing. But you can encourage them to do the right thing. Or at least, if they do the wrong thing, make it more obvious.
For example, with Bar, don't let the constructor take naked pointers. Make it take unique_ptrs, either by value or by &&. That way, you force the caller to create those unique_ptrs. You're just moving them into your member variables.
That way, if the caller does the wrong thing, the error is in the caller's code, not yours.
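For example, a minimal sketch of the by-value flavour, reusing the question's names:
#include <memory>
#include <utility>

class Bar { /* as in the question */ };

class Bar_Ptr {
public:
    explicit Bar_Ptr(std::unique_ptr<Bar> ptr) : ptr_(std::move(ptr)) {}
private:
    std::unique_ptr<Bar> ptr_;
};

// The caller has to spell out the ownership transfer:
//   Bar_Ptr p{std::make_unique<Bar>()};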

Could shared_from_this be implemented without enable_shared_from_this?

There seem to be some edge-cases when using enable_shared_from_this. For example:
boost shared_from_this and multiple inheritance
Could shared_from_this be implemented without using enable_shared_from_this? If so, could it be made as fast?
A shared_ptr is 3 things. It is a reference counter, a destroyer and an owned resource.
When you make_shared, it allocates all 3 at once, then constructs them in that one block.
When you create a shared_ptr<T> from a T*, you create the reference counter/destroyer separately, and note that the owned resource is the T*.
The goal of shared_from_this is basically that we can extract a shared_ptr<T> from a T* (under the assumption that one exists).
If all shared pointers were created via make_shared, this would be easy (unless you want defined behavior on failure), as the layout is easy.
However, not all shared pointers are created that way. Sometimes you can create a shared pointer to an object that was not created by any std library function, and hence the T* is unrelated to the shared pointer reference counting and destruction data.
As there is no room in a T* or what it points to (in general) to find such constructs, we would have to store it externally, which means global state and thread safety overhead and other pain. This would be a burden on people who do not need shared_from_this, and a performance hit compared to the current state for people who do need it (the mutex, the lookup, etc).
The current design stores a weak_ptr<T> in the enable_shared_from_this<T>. This weak_ptr is initialized whenever make_shared or shared_ptr<T> ctor is called. Now we can create a shared_ptr<T> from the T* because we have "made room" for it in the class by inheriting from enable_shared_from_this<T>.
This is again extremely low cost, and handles the simple cases very well. We end up with an overhead of one weak_ptr<T> over the baseline cost of a T.
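A rough sketch of the mechanism (this is just the idea, not the actual library implementation):
#include <memory>

template<class T>
struct enable_shared_from_this_sketch {
    std::weak_ptr<T> weak_this;                     // the single weak_ptr of per-object overhead
    std::shared_ptr<T> shared_from_this() {
        return std::shared_ptr<T>(weak_this);       // throws std::bad_weak_ptr if nothing owns the object
    }
};
// Conceptually, make_shared<T>() and the shared_ptr<T> constructors check whether T
// derives from the enable_shared_from_this base and, if so, fill in weak_this.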
When you inherit from two different enable_shared_from_this bases, their weak_ptr<A> and weak_ptr<B> members are unrelated, so it is ambiguous where you want to store the resulting smart pointer (probably both?). This ambiguity results in the error you see, as it is assumed that there is exactly one weak_ptr<?> member in one unique enable_shared_from_this<?>, and there are actually two.
The linked solution provides a clever way to extend this. It writes enable_shared_from_this_virtual<T>.
Here, instead of storing a weak_ptr<T>, we store a weak_ptr<Q>, where Q is a virtual base class of enable_shared_from_this_virtual<T>, and do so uniquely in that virtual base. It then non-virtually overrides shared_from_this and similar methods to provide the same interface as enable_shared_from_this<T> does, using the aliasing-style shared_ptr constructor, which splits the reference count/destroyer component from the owned resource component, in a type-safe way.
The overhead here is greater than the basic shared_from_this: it has virtual inheritance and forces a virtual destructor, which means the object stores a pointer to a virtual function table, and access to shared_from_this is slower as it requires a virtual function table dispatch.
The advantage is it "just works". There is now one unique shared_from_this<?> in the heirarchy, and you can still get type-safe shared pointers to classes T that inherit from shared_from_this<T>.
Yes, it could use global hash tables of type
unordered_map< T*, weak_ptr<T> >
to perform the lookup of a shared pointer from this.
#include <memory>
#include <iostream>
#include <unordered_map>
#include <cassert>
using namespace std;
template<class T>
struct MySharedFromThis {
static unordered_map<T*, weak_ptr<T> > map;
static std::shared_ptr<T> Find(T* p) {
auto iter = map.find(p);
if(iter == map.end())
return nullptr;
auto shared = iter->second.lock();
if(shared == nullptr)
throw bad_weak_ptr();
return shared;
}
};
template<class T>
unordered_map<T*, weak_ptr<T> > MySharedFromThis<T>::map;
template<class T>
struct MyDeleter {
void operator()(T * p) {
std::cout << "deleter called" << std::endl;
auto& map = MySharedFromThis<T>::map;
auto iter = map.find(p);
assert(iter != map.end());
map.erase(iter);
delete p;
}
};
template<class T>
shared_ptr<T> MyMakeShared() {
auto p = shared_ptr<T>(new T, MyDeleter<T>());
MySharedFromThis<T>::map[p.get()] = p;
return p;
}
struct Test {
shared_ptr<Test> GetShared() { return MySharedFromThis<Test>::Find(this); }
};
int main() {
auto p = MyMakeShared<Test>();
assert(p);
assert(p->GetShared() == p);
}
However, the map has to be updated whenever a shared_ptr is constructed from a T*, and before the deleter is called, costing time. Also, to be thread safe, a mutex would have to guard access to the map, serializing allocations of the same type between threads. So this implementation would not perform as well as enable_shared_from_this.
Update:
Improving on this using the same pointer tricks used by make_shared, here is an implementation which should be just as fast as shared_from_this.
template<class T>
struct Holder {
weak_ptr<T> weak;
T value;
};
template<class T>
Holder<T>* GetHolder(T* p) {
// Scary! Assumes 'value' sits immediately after 'weak' in Holder<T>, with no extra padding.
return reinterpret_cast< Holder<T>* >(reinterpret_cast<char*>(p) - sizeof(weak_ptr<T>));
}
template<class T>
struct MyDeleter
{
void operator()(T * p)
{
delete GetHolder(p);
}
};
template<class T>
shared_ptr<T> MyMakeShared() {
auto holder = new Holder<T>;
auto p = shared_ptr<T>(&(holder->value), MyDeleter<T>());
holder->weak = p;
return p;
}
template<class T>
shared_ptr<T> MySharedFromThis(T* self) {
return GetHolder(self)->weak.lock();
}
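Usage mirrors the earlier Test example (a sketch reusing the names and includes from the first snippet):
struct Test2 {
    shared_ptr<Test2> GetShared() { return MySharedFromThis(this); }
};

int main() {
    auto p = MyMakeShared<Test2>();
    assert(p && p->GetShared() == p);
}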