What are some ways you can shoot yourself in the foot when using boost::shared_ptr? In other words, what pitfalls do I have to avoid when I use boost::shared_ptr?
Cyclic references: a shared_ptr<> to something that has a shared_ptr<> to the original object. You can use weak_ptr<> to break this cycle, of course.
I've added the following as an example of what I am talking about in the comments.
class node : public enable_shared_from_this<node> {
public:
    void set_parent(shared_ptr<node> parent) { parent_ = parent; }
    void add_child(shared_ptr<node> child) {
        children_.push_back(child);
        child->set_parent(shared_from_this());
    }
    void frob() {
        do_frob();
        if (parent_) parent_->frob();
    }
private:
    void do_frob();
    shared_ptr<node> parent_;
    vector< shared_ptr<node> > children_;
};
In this example, you have a tree of nodes, each of which holds a pointer to its parent. The frob() member function, for whatever reason, ripples upwards through the tree. (This is not entirely outlandish; some GUI frameworks work this way).
The problem is that, if you lose the reference to the topmost node, then the topmost node still holds strong references to its children, and all its children also hold a strong reference to their parents. This means that there are circular references keeping all the instances from cleaning themselves up, yet there is no way of actually reaching the tree from the code, so this memory leaks.
class node : public enable_shared_from_this<node> {
public:
    void set_parent(shared_ptr<node> parent) { parent_ = parent; }
    void add_child(shared_ptr<node> child) {
        children_.push_back(child);
        child->set_parent(shared_from_this());
    }
    void frob() {
        do_frob();
        shared_ptr<node> parent = parent_.lock(); // Note: parent_.lock()
        if (parent) parent->frob();
    }
private:
    void do_frob();
    weak_ptr<node> parent_; // Note: now a weak_ptr<>
    vector< shared_ptr<node> > children_;
};
Here, the parent node has been replaced by a weak pointer. It no longer has a say in the lifetime of the node to which it refers. Thus, if the topmost node goes out of scope as in the previous example, then while it holds strong references to its children, its children don't hold strong references to their parents. Thus there are no strong references to the topmost object, and it cleans itself up. In turn, this causes the children to lose their one strong reference, which causes them to clean up, and so on. In short, this won't leak, and all just by strategically replacing a shared_ptr<> with a weak_ptr<>.
Note: The above applies equally to std::shared_ptr<> and std::weak_ptr<> as it does to boost::shared_ptr<> and boost::weak_ptr<>.
Creating multiple unrelated shared_ptr's to the same object:
#include <stdio.h>
#include "boost/shared_ptr.hpp"

class foo
{
public:
    foo() { printf( "foo()\n"); }
    ~foo() { printf( "~foo()\n"); }
};

typedef boost::shared_ptr<foo> pFoo_t;

void doSomething( pFoo_t p)
{
    printf( "doing something...\n");
}

void doSomethingElse( pFoo_t p)
{
    printf( "doing something else...\n");
}

int main() {
    foo* pFoo = new foo;
    doSomething( pFoo_t( pFoo));      // this shared_ptr deletes pFoo after the call
    doSomethingElse( pFoo_t( pFoo));  // a second, independent shared_ptr deletes it again: double delete
    return 0;
}
Constructing an anonymous temporary shared pointer, for instance inside the arguments to a function call:
f(shared_ptr<Foo>(new Foo()), g());
This is because it is permissible for new Foo() to be evaluated first, then g() to be called and to throw an exception, before the shared_ptr is ever constructed, so the shared_ptr never gets a chance to clean up the Foo object.
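A short sketch of the usual remedies (this is an addition, not part of the original answer): give the shared_ptr its own statement so it is fully constructed before any other argument can throw, or use make_shared (C++11) so there is never an unowned raw pointer in flight. The Foo, g and f names mirror the example above.
#include <memory>

struct Foo {};
int g() { return 0; }
void f(std::shared_ptr<Foo>, int) {}

int main() {
    std::shared_ptr<Foo> p(new Foo());   // constructed in its own full statement
    f(p, g());                           // even if g() throws, p still owns and frees the Foo

    f(std::make_shared<Foo>(), g());     // also leak-safe, and a single allocation
}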
Be careful making two pointers to the same object.
boost::shared_ptr<Base> b( new Derived() );
{
    boost::shared_ptr<Derived> d( b.get() );
} // d goes out of scope here, deletes pointer
b->doSomething(); // crashes
Instead, use this:
boost::shared_ptr<Base> b( new Derived() );
{
    boost::shared_ptr<Derived> d =
        boost::dynamic_pointer_cast<Derived,Base>( b );
} // d goes out of scope here, refcount--
b->doSomething(); // no crash
Also, any classes holding shared_ptrs should define copy constructors and assignment operators.
Don't try to use shared_from_this() in the constructor--it won't work. Instead create a static method to create the class and have it return a shared_ptr.
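A compact sketch of that static factory idea (the class and function names here are illustrative, not from the answer): shared_from_this() only works once a shared_ptr already owns the object, so any work that needs it is deferred to a post-construction step driven by the factory.
#include <memory>

class Widget : public std::enable_shared_from_this<Widget> {
public:
    static std::shared_ptr<Widget> create() {
        std::shared_ptr<Widget> w(new Widget);  // *w is now owned by a shared_ptr
        w->init();                              // safe to call shared_from_this() from here on
        return w;
    }
private:
    Widget() {}                                 // shared_from_this() here would fail: no owner yet
    void init() { std::shared_ptr<Widget> self = shared_from_this(); (void)self; }
};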
I've passed references to shared_ptrs without trouble. Just make sure it's copied before it's saved (i.e., no references as class members).
Here are two things to avoid:
Calling the get() function to get the raw pointer and using it after the pointed-to object goes out of scope.
Passing a reference to, or a raw pointer to, a shared_ptr is dangerous too, since it won't increment the internal count which helps keep the object alive.
We spent several weeks debugging strange behavior.
The reason was:
we passed 'this' to some thread workers instead of 'shared_from_this()'.
Not precisely a footgun, but certainly a source of frustration until you wrap your head around how to do it the C++0x way: most of the predicates you know and love from <functional> don't play nicely with shared_ptr. Happily, std::tr1::mem_fn works with objects, pointers and shared_ptrs, replacing std::mem_fun, but if you want to use std::negate, std::not1, std::plus or any of those old friends with shared_ptr, be prepared to get cozy with std::tr1::bind and probably argument placeholders as well. In practice this is actually a lot more generic, since now you basically end up using bind for every function object adaptor, but it does take some getting used to if you're already familiar with the STL's convenience functions.
This DDJ article touches on the subject, with lots of example code. I also blogged about it a few years ago when I first had to figure out how to do it.
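For the record, here is a small sketch of what that looks like in practice (written with the C++11 spellings, which mirror the std::tr1 ones; the Widget type is made up for the example):
#include <algorithm>
#include <functional>
#include <memory>
#include <vector>

struct Widget {
    void refresh() {}
    bool dirty() const { return false; }
};

int main() {
    std::vector<std::shared_ptr<Widget> > widgets;
    widgets.push_back(std::make_shared<Widget>());

    // mem_fn dereferences raw pointers, smart pointers and plain objects alike,
    // so it replaces std::mem_fun for containers of shared_ptr.
    std::for_each(widgets.begin(), widgets.end(), std::mem_fn(&Widget::refresh));

    // The old adaptors (std::not1 etc.) don't compose with shared_ptr directly;
    // instead, negate through bind and a placeholder.
    using namespace std::placeholders;
    long clean = std::count_if(widgets.begin(), widgets.end(),
                               std::bind(std::logical_not<bool>(),
                                         std::bind(&Widget::dirty, _1)));
    (void)clean;
}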
Using shared_ptr for really small objects (like char or short) can be an overhead if you have a lot of small objects on the heap but they are not really "shared". boost::shared_ptr allocates 16 bytes for every new reference count it creates on g++ 4.4.3 and VS2008 with Boost 1.42. std::tr1::shared_ptr allocates 20 bytes. Now if you have a million distinct shared_ptr<char>, that means 20 million bytes of your memory are gone in holding just count=1. Not to mention the indirection costs and memory fragmentation. Try it with the following on your favorite platform.
#include <iostream>
#include <cstdlib>
#include <new>

void * operator new (size_t size) {
    std::cout << "size = " << size << std::endl;
    void *ptr = malloc(size);
    if (!ptr) throw std::bad_alloc();
    return ptr;
}

void operator delete (void *p) {
    free(p);
}
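As a side note (an addition, not part of the original answer), a common way to at least halve the number of allocations is make_shared, which folds the object and its reference-count block into a single block. A minimal sketch using the C++11 names; if it is compiled in the same translation unit as the operator new override above, the difference shows up in the printed sizes:
#include <memory>

int main() {
    std::shared_ptr<char> a(new char('x'));                 // two allocations: the char and its control block
    std::shared_ptr<char> b = std::make_shared<char>('y');  // one fused allocation for both
    (void)a; (void)b;
}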
Giving out a shared_ptr< T > to this inside a class definition is also dangerous.
Use enable_shared_from_this instead.
See the following post here
You need to be careful when you use shared_ptr in multithreaded code. It is then relatively easy to end up in a situation where several shared_ptrs pointing to the same memory are used by different threads; the reference count is updated atomically, but neither the pointed-to object nor a single shared_ptr instance shared between threads is protected.
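A small sketch of where the line is (the Config type is made up for illustration): copies of a shared_ptr may be created and destroyed freely on different threads because the reference count is atomic, but the pointed-to object, and any single shared_ptr object touched by several threads, still needs its own synchronization.
#include <memory>
#include <mutex>
#include <thread>

struct Config { int value = 0; };

std::mutex m;

int main() {
    std::shared_ptr<Config> cfg = std::make_shared<Config>();

    std::shared_ptr<Config> copy = cfg;        // each thread gets its own copy: this part is safe
    std::thread worker([copy] {
        std::lock_guard<std::mutex> lock(m);   // shared_ptr does not protect the pointee,
        copy->value += 1;                      // so access to *copy still needs the mutex
    });

    {
        std::lock_guard<std::mutex> lock(m);
        cfg->value += 1;
    }
    worker.join();
}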
The popular widespread use of shared_ptr will almost inevitably cause unwanted and unseen memory occupation.
Cyclic references are a well known cause, and some of them can be indirect and difficult to spot, especially in complex code that is worked on by more than one programmer; a programmer may decide that one object needs a reference to another as a quick fix and doesn't have time to examine all the code to see if he is closing a cycle. This hazard is hugely underestimated.
Less well understood is the problem of unreleased references. If an object is shared out to many shared_ptrs then it will not be destroyed until every one of them is zeroed or goes out of scope. It is very easy to overlook one of these references and end up with objects lurking unseen in memory that you thought you had finished with.
Although strictly speaking these are not memory leaks (it will all be released before the program exits) they are just as harmful and harder to detect.
These problems are the consequences of expedient false declarations:
1. Declaring what you really want to be single ownership as shared_ptr. scoped_ptr would be correct, but then any other reference to that object will have to be a raw pointer, which could be left dangling.
2. Declaring what you really want to be a passive observing reference as shared_ptr. weak_ptr would be correct, but then you have the hassle of converting it to shared_ptr every time you want to use it.
I suspect that your project is a fine example of the kind of trouble that this practice can get you into.
If you have a memory intensive application you really need single ownership so that your design can explicitly control object lifetimes.
With single ownership opObject=NULL; will definitely delete the object and it will do it now.
With shared ownership spObject=NULL; ........who knows?......
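A small sketch of that difference in destruction timing (the Resource type is illustrative): with single ownership the release is immediate and local; with shared ownership it happens whenever the last copy disappears.
#include <memory>

struct Resource { ~Resource() { /* release something expensive */ } };

int main() {
    std::unique_ptr<Resource> opObject(new Resource);
    opObject.reset();                                    // destroyed here, deterministically

    std::shared_ptr<Resource> spObject(new Resource);
    std::shared_ptr<Resource> elsewhere = spObject;      // e.g. stashed in a cache or callback
    spObject.reset();                                    // NOT destroyed: 'elsewhere' still holds it
}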
If you have a registry of the shared objects (a list of all active instances, for example), the objects will never be freed. Solution: as in the case of circular dependency structures (see Kaz Dragon's answer), use weak_ptr as appropriate.
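A minimal sketch of such a registry (the Session type and function names are made up): it stores weak_ptr, so registration does not extend any object's lifetime, and dead entries simply fail to lock.
#include <memory>
#include <vector>

struct Session {};

std::vector<std::weak_ptr<Session> > g_registry;

void register_session(const std::shared_ptr<Session>& s) {
    g_registry.push_back(s);                    // does not keep the session alive
}

void for_each_live_session() {
    for (std::size_t i = 0; i < g_registry.size(); ++i) {
        if (std::shared_ptr<Session> s = g_registry[i].lock()) {
            // *s is guaranteed alive for the duration of this block
        }
    }
}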
Smart pointers are not for everything, and raw pointers cannot be eliminated
Probably the worst danger is that since shared_ptr is a useful tool, people will start to put it every where. Since plain pointers can be misused, the same people will hunt raw pointers and try to replace them with strings, containers or smart pointers even when it makes no sense. Legitimate uses of raw pointers will become suspect. There will be a pointer police.
This is not only probably the worst danger, it may be the only serious danger. All the worst abuses of shared_ptr will be the direct consequence of the idea that smart pointers are superior to raw pointer (whatever that means), and that putting smart pointers everywhere will make C++ programming "safer".
Of course the mere fact that a smart pointer needs to be converted to a raw pointer to be used refutes this claim of the smart pointer cult, but the fact that the raw pointer access is "implicit" in operator* and operator-> (or explicit in get()), rather than available as an implicit conversion, is enough to give the impression that this is not really a conversion, and that the raw pointer produced by this non-conversion is a harmless temporary.
C++ cannot be made a "safe language", and no useful subset of C++ is "safe"
Of course the pursuit of a safe subset ("safe" in the strict sense of "memory safe", as LISP, Haskell, Java...) of C++ is doomed to be endless and unsatisfying, as the safe subset of C++ is tiny and almost useless, as unsafe primitives are the rule rather than the exception. Strict memory safety in C++ would mean no pointers and only references with automatic storage class. But in a language where the programmer is trusted by definition, some people will insist on using some (in principle) idiot-proof "smart pointer", even where there is no other advantage over raw pointers that one specific way to screw the program state is avoided.
Shared pointers are a good idea, no doubt. But as long as a large scale program includes raw pointers, I think there is a big risk in using shared pointers. Mainly, you will lose control of the real life-cycle of pointers to objects that hold raw pointers, and bugs will occur in locations which are more difficult to find and debug.
So my question is, was there no attempt to add to modern c++ a "weak pointer" which does not depend on using shared pointers? I mean just a pointer which becomes NULL when deleted in any part of the program. Is there a reason not to use such a self-made wrapper?
To better explain what I mean, the following is such a "weak pointer" that I made. I named it WatchedPtr.
#include <memory>
#include <iostream>

template <typename T>
class WatchedPtr {
public:
    // the only way to allocate a new pointer: WatchedPtr itself calls new
    template <typename... ARGS>
    WatchedPtr(ARGS... args) : _ptr(new T(args...)), _allocated(std::make_shared<bool>(true)) {}
    WatchedPtr(const WatchedPtr<T>& other) : _ptr(other._ptr), _allocated(other._allocated) {}
    // delete the pointer; every copy sees the shared flag flip
    void del() { delete _ptr; _ptr = nullptr; *_allocated = false; }
    WatchedPtr<T>& operator=(const WatchedPtr<T>& other) {
        _ptr = other._ptr;
        _allocated = other._allocated;
        return *this;
    }
    bool isNull() const { return !*_allocated; }
    T* operator->() const { return _ptr; }
    T& operator*() const { return *_ptr; }
private:
    T* _ptr;
    std::shared_ptr<bool> _allocated;
};

struct S {
    int a = 1;
};

int main() {
    WatchedPtr<S> p1;
    WatchedPtr<S> p2(p1);
    p1->a = 8;
    std::cout << p1.isNull() << std::endl;
    std::cout << p2.isNull() << std::endl;
    p2.del();
    std::cout << p1.isNull() << std::endl;
    std::cout << p2.isNull() << std::endl;
    return 0;
}
Result:
0
0
1
1
-Edited-
Thank you all. Some clarifications following the comments and answers so far:
The implementation I presented for WatchedPtr is merely to demonstrate what I mean: a pointer that cannot be initialized from an externally allocated raw pointer, cannot be deleted externally, and becomes null when it is deleted. The implementation is knowingly far from perfect and was not meant to be.
The problem of mixing shared_ptr and raw pointers is very common: an A* is held as a raw pointer, so it is created at some point of the program and explicitly deleted at some other point. B holds an A*, and B* is held as a shared_ptr, so B* has a vague lifespan. Thus B may live long after the deletion of the A* that it holds.
The main usage of "WatchedPtr" in my mind is defensive programming, i.e. check for null and do the best thing possible for continuity (plus a debug error). shared_ptr can do this too, but in a very dangerous way: it will hide and delay the problem.
There can also be a design usage for "WatchedPtr" (very few and explicit "owners"), but this is not the main idea. For that indeed shared pointers are doing the job.
The intention of "WatchedPtr" is not to replace all existing raw pointers in the program at once. It is not the same effort as switching to shared_ptr, which IMHO has to be done for the whole program at once, which is unrealistic for large scale programs.
Weak pointers rely on notifications from the smart pointer infrastructure, so you could never do this with actual raw pointers.
One could imagine an extension of, say, unique_ptr which supported weak pointers, certainly. Presumably the main reason that nobody rushed in to implement such a feature is that weak pointers are already at the "Use refcounting and everything should just work" end of the scale, while unique_ptr is at the "Manage your lifetimes through RAII or you're not a real C++ programmer" end of the scale. Weak pointers also require there to be a separate control block per allocation, meaning that the performance advantage of such a WatchedPtr would be minimal compared to shared_ptr.
I think there is a big risk in using shared pointers. Mainly, you will lose control of the real life-cycle of pointers to objects that hold raw pointers, and bugs will occur in locations which are more difficult to find and debug.
Then you say
just a pointer which becomes NULL when deleted in any part of the program.
Don't you see the contradiction?
You don't want to use shared pointer because the lifetime of objects are determined at runtime. So far so good.
However, you want a pointer that automatically becomes null when the owner deletes it. The problem is, if the lifetime of your pointer is known, you should not need that at all! If you know when the lifetime of your pointer ends, then you should be able to remove all instances of that pointer, or have a means to check whether the pointer is dead.
If you have a pointer that you don't know when the owner will free it and have no way to check or no observable side effect for the point of view of the weak owner, then do you really have control over lifetime of your pointer? Not really.
In fact, your implementation relies on containing a shared pointer. This is enlightening in the sense that you need some form of shared ownership in order to implement a raw pointer that can have weak pointers to it. And if you need shared ownership to implement a raw pointer with weak references, you are left with a shared pointer. That's why the existence of your proposed class is contradictory.
std::shared_ptr + std::weak_ptr is made to deal with the issue of "parts of your program don't know when the owner frees the resource". What you need is a single std::shared_ptr and multiple std::weak_ptr, so they know when the resource is freed. These classes have the information needed to check the lifetime of a variable at runtime.
Or if, on the contrary, you know the lifetime of your pointers, then use that knowledge and find a way to remove dangling pointers, or expose a way to check for dangling pointers.
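For concreteness, a short sketch of the single-owner/many-observers pattern described above (reusing the S type from the question): the weak_ptr observers can tell exactly when the owner has freed the object, which is what WatchedPtr set out to do.
#include <iostream>
#include <memory>

struct S { int a = 1; };

int main() {
    std::shared_ptr<S> owner = std::make_shared<S>();
    std::weak_ptr<S> observer = owner;                       // does not keep S alive

    if (std::shared_ptr<S> p = observer.lock()) p->a = 8;    // safe access while it exists

    owner.reset();                                           // the single owner frees the object
    std::cout << observer.expired() << "\n";                 // prints 1: the observer knows it is gone
}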
Reading the answers and comments, along with C++ Core Guidelines by Bjarne Stroustrup & Herb Sutter, I have come to the following answer:
When following the guidelines, there is no need for a "WatchedPtr" which involves "new" and "delete". However, a way to track the validity of a raw pointer taken from a smart pointer is still in question for me, for debug/QA purposes.
In details:
Raw pointers should continue to be used, for various reasons. However, explicit "new" and "delete" should not; the calls to "new" and "delete" should all be replaced by shared_ptr/unique_ptr.
At the place where a raw pointer is currently allocated, there is no point in replacing it by "WatchedPtr".
If a raw pointer is replaced by something else at the place where it is allocated, it will in most cases be a unique_ptr, and in the other cases a shared_ptr. The "WatchedPtr", if used at all, would continue from that point, built from the shared/unique pointer.
Therefore I have posted a somewhat different question.
I need a collection in which I can store heap-allocated objects having virtual functions.
I know about boost::shared_ptr, std::unique_ptr (C++11) and boost::ptr_(vector|list|map), but they don't solve the duplicate pointer problem.
Just to describe the problem: I have a function which accepts a heap-allocated pointer and stores it for future use:
void SomeClass::add(T* ptr)
{
    _list.push_back(ptr);
}
But if I call add twice with the same parameter ptr, _list will contain two pointers to the same object, and when _list is destructed, the same object will be deleted multiple times.
If _list reference-counted the pointers it stores and used the counts at deletion time, this problem would be solved and objects would not be deleted multiple times.
So the question is:
Does somebody know of a library with collections (vector, list, map in essence) of pointers with auto-delete on destruction and support for reference counting?
Or maybe I can solve this problem using some other technique?
Update:
I need support for duplicate pointers, so I can't use std::set.
As Kerrek SB and Grizzly mentioned, it is a bad idea to use raw pointers in general, and they suggest using std::make_shared and forgetting about instantiation via new. But this is the responsibility of client-side code, not of the class which I design. Even if I change the add signature (and the _list container, of course) to
void SomeClass::add(std::shared_ptr<T> ptr)
{
    _list.push_back(ptr);
}
then somebody (who doesn't know about std::make_shared) still can write this:
SomeClass instance;
T* ptr = new T();
instance.add(ptr);
instance.add(ptr);
So this is not the full solution I am waiting for, but it is useful if you write the code alone.
Update 2:
As an alternative solution I found cloning (using the generated copy constructor). I mean that I can change my add function like this:
template <typename R>
void SomeClass::add(const R& ref)
{
    _list.push_back(new R(ref));
}
This will allow virtual method calls (R is a class which extends some base class/interface) and disallows duplicate pointers. But this solution has the overhead of a clone.
Yes: std::list<std::shared_ptr<T>>.
The shared pointer is available from <memory>, or on older platforms from <tr1/memory>, or from Boost's <boost/shared_ptr.hpp>. You won't need to delete anything manually, as the shared pointer takes care of this itself. You will however need to keep all your heap pointers inside a shared pointer right from the start:
std::shared_ptr<T> p(new T); // legacy
auto p = std::make_shared<T>(); // better
If you need another shared pointer to the same object, make a copy of the existing shared pointer (rather than constructing a new shared pointer from the underlying raw pointer): auto q = p;
The moral here is: If you're using naked pointers, something is wrong.
Realize that smart pointers are compared by comparing the underlying pointer. So you can just use a std::set of whatever smart pointer you prefer. Personally I use std::unique_ptr over shared_ptr whenever I can get away with it, since it makes the ownership much clearer (whoever holds the unique_ptr is the owner) and has much lower overhead too. I have found that this is enough for almost all my code. The code would look something like the following:
std::set<std::unique_ptr<T> > _list;

void SomeClass::add(T* ptr)
{
    std::unique_ptr<T> p(ptr);
    auto iter = _list.find(p);
    if (iter == _list.end())
        _list.insert(std::move(p));
    else
        p.release();
}
I'm not sure right now if that is overkill (I'd have to check whether insert is guaranteed not to do anything if the insertion fails), but it should work. Doing this with shared_ptr<T> would look similar, although it would be a bit more complex due to the lack of a release member. In that case I would probably first construct a shared_ptr<T> with a do-nothing deleter to pass to the call to find, and then another shared_ptr<T> which is actually inserted.
Of course personally I would avoid doing this and always pass around smart pointers when the ownership of a pointer changes hands. Therefore I would rewrite SomeClass::add as void SomeClass::add(std::unique_ptr<T> ptr) or void SomeClass::add(std::shared_ptr<T> ptr) which would pretty much solve the problem of having multiple instances anyways (as long as the pointer is always wrapped).
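A sketch of that by-value signature (assuming, as an illustration, that SomeClass is a template storing a std::vector of unique_ptrs): the transfer of ownership is then visible at every call site.
#include <memory>
#include <utility>
#include <vector>

template <typename T>
class SomeClass {
public:
    void add(std::unique_ptr<T> ptr) {
        _list.push_back(std::move(ptr));    // ownership moves into the container
    }
private:
    std::vector<std::unique_ptr<T> > _list;
};

int main() {
    SomeClass<int> c;
    std::unique_ptr<int> p(new int(42));
    c.add(std::move(p));                    // p is now empty
    // A second c.add(std::move(p)) would add a null pointer, not a second owner
    // of the same object, so no double delete can occur.
}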
I was recently introduced to the existence of auto_ptr and shared_ptr and I have a pretty simple/naive question.
I try to implement a data structure and I need to point to the children of a Node which (are more than 1 and its) number may change. Which is the best alternative and why:
class Node
{
public:
    // ...
    Node *children;
private:
    //...
};

class Node
{
public:
    // ...
    shared_ptr<Node> children;
private:
    //...
};
I am not sure, but I think auto_ptr does not work for arrays. I am also not sure whether I should use double pointers. Thanks for any help.
You're right that auto_ptr doesn't work for arrays. When it destroys the object it owns, it uses delete object;, so if you used new objects[whatever];, you'll get undefined behavior. Perhaps a bit more subtly, auto_ptr doesn't fit the requirements of "Copyable" (as the standard defines the term) so you can't create a container (vector, deque, list, etc.) of auto_ptr either.
A shared_ptr is for a single object as well. It's for a situation where you have shared ownership and need to delete the object only when all the owners go out of scope. Unless there's something going on that you haven't told us about, chances are pretty good that it doesn't fit your requirements very well either.
You might want to look at yet another class that may be new to you: Boost ptr_vector. At least based on what you've said, it seems to fit your requirements better than either auto_ptr or shared_ptr would.
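A minimal sketch of the boost::ptr_vector suggestion (the Node and Tree names are illustrative): the container owns the heap-allocated elements, deletes them when it is destroyed, and hands out references rather than pointers.
#include <boost/ptr_container/ptr_vector.hpp>

struct Node {
    virtual ~Node() {}
    virtual void draw() {}
};

struct Tree {
    boost::ptr_vector<Node> children;   // owns the Nodes, deletes them in ~Tree
};

int main() {
    Tree t;
    t.children.push_back(new Node);     // the container takes ownership
    t.children[0].draw();               // elements are accessed as references
}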
I have used std::vector<std::shared_ptr<Node> > children successfully in a similar situation.
The main benefit of using a vector of shared_ptrs rather than an array is that all of the resource management is handled for you. This is especially handy in two situations:
1) When the vector is no longer in scope, it automatically calls delete on all of its contents. In this case, the reference count of the child Node will drop by 1 and if nothing else is referencing it, delete will be called on the object.
2) If you are referencing the Node elsewhere, there is no risk of being left with a dangling pointer to a deleted object. The object will only be deleted when there are no more references to it.
Unless you want behaviour that is substantially more complicated (perhaps there is a reason why an array is necessary), I would suggest this might be a good approach for you.
A simple implementation of the idea:
#include <memory>
#include <vector>

template <typename T>
class Node {
private:
    T contents;
    std::vector<std::shared_ptr<Node> > children;
public:
    Node(T value) : contents(value) {}
    void add_child(T value) {
        auto p = std::make_shared<Node>(value);
        children.push_back(p);
    }
    std::shared_ptr<Node> get_child(size_t index) {
        // Returning a shared pointer ensures the node isn't deleted
        // while it is still in use.
        return children.at(index);
    }
    void remove_child(size_t index) {
        // The whole branch will be destroyed automatically.
        // If part of the tree is still needed (eg. for undo), the
        // shared pointer will ensure it is not destroyed.
        children.erase(children.begin() + index);
    }
};
auto_ptr is deprecated in favor of std::unique_ptr, and by the way, std::unique_ptr does work for arrays. You just need C++11 support. There are already lots of resources about smart pointers and move semantics out there. The main difference between auto_ptr and unique_ptr is that auto_ptr does a move when you call the copy constructor, while unique_ptr forbids the copy constructor but allows a move when calling the move constructor. Therefore you need C++11 support with move semantics.
Stroustrup discusses the question of "What is an auto_ptr and why isn't there an auto_array?" and concludes that there is no need for the latter, since the desired functionality can be accomplished with a vector.
http://www.stroustrup.com/bs_faq2.html#auto_ptr
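For completeness, a tiny sketch of the two C++11 alternatives mentioned above for owning an array: std::unique_ptr with an array type (which correctly calls delete[]), or simply std::vector.
#include <memory>
#include <vector>

int main() {
    std::unique_ptr<int[]> a(new int[8]);   // delete[] is called automatically
    a[0] = 1;

    std::vector<int> v(8);                  // usually the better choice anyway
    v[0] = 1;
}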
I'm trying to learn C++, and trying to understand returning objects. I seem to see 2 ways of doing this, and need to understand what is the best practice.
Option 1:
QList<Weight *> ret;
Weight *weight = new Weight(cname, "Weight");
ret.append(weight);
ret.append(c);
return &ret;
Option 2:
QList<Weight *> *ret = new QList<Weight *>();
Weight *weight = new Weight(cname, "Weight");
ret->append(weight);
ret->append(c);
return ret;
(of course, I may not understand this yet either).
Which way is considered best-practice, and should be followed?
Option 1 is defective. When you declare an object
QList<Weight *> ret;
it only lives in the local scope. It is destroyed when the function exits. However, you can make this work with
return ret; // no "&"
Now, although ret is destroyed, a copy is made first and passed back to the caller.
This is the generally preferred methodology. In fact, the copy-and-destroy operation (which accomplishes nothing, really) is usually elided, or optimized out and you get a fast, elegant program.
Option 2 works, but then you have a pointer to the heap. One way of looking at C++ is that the purpose of the language is to avoid manual memory management such as that. Sometimes you do want to manage objects on the heap, but option 1 still allows that:
QList<Weight *> *myList = new QList<Weight *>( getWeights() );
where getWeights is your example function. (In this case, you may have to define a copy constructor QList::QList( QList const & ), but like the previous example, it will probably not get called.)
Likewise, you probably should avoid having a list of pointers. The list should store the objects directly. Try using std::list… practice with the language features is more important than practice implementing data structures.
Use the option #1 with a slight change; instead of returning a reference to the locally created object, return its copy.
i.e. return ret;
Most C++ compilers perform Return value optimization (RVO) to optimize away the temporary object created to hold a function's return value.
In general, you should never return a reference or a pointer. Instead, return a copy of the object or return a smart pointer class which owns the object. In general, use static storage allocation unless the size varies at runtime or the lifetime of the object requires that it be allocated using dynamic storage allocation.
As has been pointed out, your example of returning by reference returns a reference to an object that no longer exists (since it has gone out of scope) and hence are invoking undefined behavior. This is the reason you should never return a reference. You should never return a raw pointer, because ownership is unclear.
It should also be noted that returning by value is incredibly cheap due to return-value optimization (RVO), and will soon be even cheaper due to the introduction of rvalue references.
Passing and returning references invites responsibility! You need to take care that when you modify some values there are no side effects; the same goes for pointers. I recommend returning objects (but it very much depends on what exactly you want to do).
In your Option 1, you return the address, and that's very bad as it leads to undefined behaviour (ret will be deallocated, but you'll access ret's address in the calling function).
So use return ret;
It's generally bad practice to allocate memory that has to be freed elsewhere. That's one of the reasons we have C++ rather than just C. (But savvy programmers were writing object-oriented code in C long before the Age of Stroustrup.) Well-constructed objects have quick copy and assignment operators (sometimes using reference-counting), and they automatically free up the memory that they "own" when they are freed and their DTOR automatically is called. So you can toss them around cheerfully, rather than using pointers to them.
Therefore, depending on what you want to do, the best practice is very likely "none of the above." Whenever you are tempted to use "new" anywhere other than in a CTOR, think about it. Probably you don't want to use "new" at all. If you do, the resulting pointer should probably be wrapped in some kind of smart pointer. You can go for weeks and months without ever calling "new", because the "new" and "delete" are taken care of in standard classes or class templates like std::list and std::vector.
One exception is when you are using an old-fashioned library like OpenCV that sometimes requires that you create a new object and hand off a pointer to it to the system, which takes ownership.
If QList and Weight are properly written to clean up after themselves in their DTORS, what you want is,
QList<Weight> ret;
Weight weight(cname, "Weight");
ret.append(weight);
ret.append(c);
return ret;
As already mentioned, it's better to avoid allocating memory which must be deallocated elsewhere. This is what I prefer doing (...these days):
void someFunc(QList<Weight *>& list){
    // ... other code
    Weight *weight = new Weight(cname, "Weight");
    list.append(weight);
    list.append(c);
}
// ... later ...
QList<Weight *> list;
someFunc(list);
Even better -- avoid new completely and using std::vector:
void someFunc(std::vector<Weight>& list){
    // ... other code
    Weight weight(cname, "Weight");
    list.push_back(weight);
    list.push_back(c);
}
// ... later ...
std::vector<Weight> list;
someFunc(list);
You can always use a bool or enum if you want to return a status flag.
Based on experience, do not use plain pointers because you can easily forget to add proper destruction mechanisms.
If you want to avoid copying, you can go for implementing the Weight class with copy constructor and copy operator disabled:
#include <iostream>
#include <string>

class Weight {
protected:
    std::string name;
    std::string desc;
public:
    Weight (std::string n, std::string d)
        : name(n), desc(d) {
        std::cout << "W c-tor\n";
    }
    ~Weight (void) {
        std::cout << "W d-tor\n";
    }
    // disable them to prevent copying
    // and generate an error when compiling
    Weight(const Weight&);
    void operator=(const Weight&);
};
Then, for the class implementing the container, use shared_ptr or unique_ptr to implement the data member:
#include <memory>
#include <vector>

template <typename T>
class QList {
protected:
    std::vector<std::shared_ptr<T>> v;
public:
    QList (void) {
        std::cout << "Q c-tor\n";
    }
    ~QList (void) {
        std::cout << "Q d-tor\n";
    }
    // disable them to prevent copying
    QList(const QList&);
    void operator=(const QList&);
    void append(T& t) {
        v.push_back(std::shared_ptr<T>(&t));
    }
};
Your function for adding an element would make use of Return Value Optimization and would not call the copy constructor (which is not defined):
QList<Weight> create (void) {
    QList<Weight> ret;
    Weight& weight = *(new Weight("cname", "Weight"));
    ret.append(weight);
    return ret;
}
On adding an element, let the container take ownership of the object, so do not deallocate it yourself:
QList<Weight> ql = create();
ql.append(*(new Weight("aname", "Height")));
// this generates segmentation fault because
// the object would be deallocated twice
Weight w("aname", "Height");
ql.append(w);
Or, better, force the user to pass your QList implementation only smart pointers:
void append(std::shared_ptr<T> t) {
v.push_back(t);
}
And outside class QList you'll use it like:
Weight * pw = new Weight("aname", "Height");
ql.append(std::shared_ptr<Weight>(pw));
Using shared_ptr you could also 'take' objects from the collection, make copies, or remove them from the collection but keep using them locally; behind the scenes it would still be the same single object.
All of these are valid answers: avoid pointers, use copy constructors, and so on. Unless, that is, you need to create a program with good performance. In my experience, most performance-related problems come from copy constructors and the overhead they cause. (And smart pointers are not any better in this field; I had to remove all my boost code and do the deletes manually because it was taking too many milliseconds to do its job.)
If you're creating a "simple" program (although "simple" arguably means you should go with Java or C#), then use copy constructors, avoid pointers and use smart pointers to deallocate the used memory. If you're creating a complex program or you need good performance, use pointers all over the place and avoid copy constructors (if possible); just establish your own set of rules for deleting pointers and use valgrind to detect memory leaks.
Maybe I will get some negative points, but I think you'll need to get the full picture to take your design choices.
I think that saying "if you're returning pointers your design is wrong" is a little misleading. Output parameters tend to be confusing because they are not a natural choice for "returning" results.
I know this question is old, but I don't see any other argument pointing out the performance overhead of that design choices.
I have objects which create other child objects within their constructors, passing 'this' so the child can save a pointer back to its parent. I use boost::shared_ptr extensively in my programming as a safer alternative to std::auto_ptr or raw pointers. So the child would have code such as shared_ptr<Parent>, and boost provides the shared_from_this() method which the parent can give to the child.
My problem is that shared_from_this() cannot be used in a constructor, which isn't really a crime because 'this' should not be used in a constructor anyways unless you know what you're doing and don't mind the limitations.
Google's C++ Style Guide states that constructors should merely set member variables to their initial values. Any complex initialization should go in an explicit Init() method. This solves the 'this-in-constructor' problem as well as a few others as well.
What bothers me is that people using your code now must remember to call Init() every time they construct one of your objects. The only way I can think of to enforce this is by having an assertion that Init() has already been called at the top of every member function, but this is tedious to write and cumbersome to execute.
Are there any idioms out there that solve this problem at any step along the way?
Use a factory method to 2-phase construct & initialize your class, and then make the ctor & Init() function private. Then there's no way to create your object incorrectly. Just remember to keep the destructor public and to use a smart pointer:
#include <memory>

class BigObject
{
public:
    static std::tr1::shared_ptr<BigObject> Create(int someParam)
    {
        std::tr1::shared_ptr<BigObject> ret(new BigObject(someParam));
        ret->Init();
        return ret;
    }
private:
    bool Init()
    {
        // do something to init
        return true;
    }
    BigObject(int para)
    {
    }
    BigObject() {}
};

int main()
{
    std::tr1::shared_ptr<BigObject> obj = BigObject::Create(42);
    return 0;
}
EDIT:
If you want the object to live on the stack, you can use a variant of the above pattern. As written, this will create a temporary and use the copy ctor:
#include <memory>

class StackObject
{
public:
    StackObject(const StackObject& rhs)
        : n_(rhs.n_)
    {
    }
    static StackObject Create(int val)
    {
        StackObject ret(val);
        ret.Init();
        return ret;
    }
private:
    int n_;
    StackObject(int n = 0) : n_(n) {}
    bool Init() { return true; }
};

int main()
{
    StackObject sObj = StackObject::Create(42);
    return 0;
}
Google's C++ programming guidelines have been criticized here and elsewhere again and again. And rightly so.
I only ever use two-phase initialization if it's hidden behind a wrapping class. If manually calling initialization functions worked well, we'd still be programming in C, and C++ with its constructors would never have been invented.
Depending on the situation, this may be a case where shared pointers don't add anything. They should be used any time lifetime management is an issue. If the child object's lifetime is guaranteed to be shorter than that of the parent, I don't see a problem with using raw pointers. For instance, if the parent creates and deletes the child objects (and no one else does), there is no question over who should delete the child objects.
KeithB has a really good point that I would like to extend (in a sense that is not related to the question, but that will not fit in a comment):
In the specific case of the relation of an object with its subobjects the lifetimes are guaranteed: the parent object will always outlive the child object. In this case the child (member) object does not share the ownership of the parent (containing) object, and a shared_ptr should not be used. It should not be used for semantic reasons (no shared ownership at all) nor for practical reasons: you can introduce all sorts of problems: memory leaks and incorrect deletions.
To ease discussion I will use P to refer to the parent object and C to refer to the child or contained object.
If P lifetime is externally handled with a shared_ptr, then adding another shared_ptr in C to refer to P will have the effect of creating a cycle. Once you have a cycle in memory managed by reference counting you most probably have a memory leak: when the last external shared_ptr that refers to P goes out of scope, the pointer in C is still alive, so the reference count for P does not reach 0 and the object is not released, even if it is no longer accessible.
If P is handled by a different pointer then when the pointer gets deleted it will call the P destructor, that will cascade into calling the C destructor. The reference count for P in the shared_ptr that C has will reach 0 and it will trigger a double deletion.
If P has automatic storage duration, when its destructor gets called (the object goes out of scope or the containing object's destructor is called), then the shared_ptr will trigger the deletion of a block of memory that was not new-ed.
The common solution is breaking cycles with weak_ptrs, so that the child object would not keep a shared_ptr to the parent, but rather a weak_ptr. At this stage the problem is the same: to create a weak_ptr the object must already be managed by a shared_ptr, which during construction cannot happen.
Consider using either a raw pointer (handling ownership of a resource through a raw pointer is unsafe, but here ownership is handled externally, so that is not an issue) or even a reference (which also tells other programmers that you trust the referred-to object P to outlive the referring object C).
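A small sketch of that arrangement (the names are illustrative): the parent owns the child outright, and the child keeps only a non-owning reference back, so no shared ownership and no cycle is involved.
#include <memory>

class Parent;

class Child {
public:
    explicit Child(Parent& parent) : parent_(parent) {}   // non-owning back-reference
private:
    Parent& parent_;
};

class Parent {
public:
    Parent() : child_(new Child(*this)) {}   // fine: the child only stores the reference
private:
    std::unique_ptr<Child> child_;           // the parent owns the child outright
};

int main() {
    Parent p;   // the child can never outlive its parent here
}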
An object that requires complex construction sounds like a job for a factory.
Define an interface or an abstract class, one that cannot be constructed, plus a free function that, possibly with parameters, returns a pointer to the interface but behind the scenes takes care of the complexity.
You have to think of design in terms of what the end user of your class has to do.
Do you really need to use the shared_ptr in this case? Can the child just have a plain pointer? After all, it's the child object, so it's owned by the parent, so couldn't it just have a normal pointer to its parent?