In our large project we have a lot of classes with the following typedefs:
class Foo
{
public:
    typedef std::auto_ptr<Foo>     Ptr;
    typedef boost::shared_ptr<Foo> Ref;
    ...
};
...
Foo::Ref foo(new Foo);
...
doBar(foo);
...
Using them is very convenient. But I doubt whether auto_ptr is semantically close to "Ptr" and whether shared_ptr really matches "Ref". Or should auto_ptr be used explicitly, since it has "ownership transfer" semantics?
Thanks,
std::auto_ptr has ownership-transfer semantics, but it's quite broken. If you can use boost::shared_ptr, then you can also use Boost's unique_ptr emulation instead of std::auto_ptr, since it does what one would expect: it transfers ownership only on an explicit move, so the compiler stops you from accidentally using the previous owner afterwards, which std::auto_ptr doesn't.
Even better, if you can use C++11, then swap to std::unique_ptr and std::shared_ptr.
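A minimal sketch of that switch, assuming C++11 and keeping the original typedef names purely for illustration:

#include <memory>
#include <utility>

class Foo
{
public:
    typedef std::unique_ptr<Foo> Ptr;   // sole ownership; transfer requires an explicit std::move
    typedef std::shared_ptr<Foo> Ref;   // shared, reference-counted ownership
};

int main()
{
    Foo::Ref foo = std::make_shared<Foo>();   // preferred way to create a shared owner
    Foo::Ptr p(new Foo);                      // unique owner
    Foo::Ptr q = std::move(p);                // ownership moves; p is now null
}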
You shouldn't use std::auto_ptr; it's deprecated, and I consider it dangerous, even more so when you hide it behind such a generic typedef as Ptr.
I don't think it makes any sense to call shared_ptr "Ref"; in this case it is more of a "Ptr" than auto_ptr is.
EDIT: I consider it dangerous because it is easy to misuse: even when you fully understand how it works, you can still misuse it accidentally, especially when it is hidden behind a typedef. A good class should be easy to use correctly and difficult to misuse. Especially since the advent of unique_ptr, I can't see any useful scenario for auto_ptr.
I believe the naming is just a nomenclature that someone chose.
It probably should have been the other way around: Ref for auto_ptr and Ptr for shared_ptr, because:
References are immutable and hence cannot be made to refer to another object. auto_ptr has a (remotely) similar semantic, transfer of ownership, which means you would probably not want to assign an auto_ptr because of the non-intuitive behavior it shows: the assignee gains ownership while the object being assigned from loses it.
On the other hand, shared_ptr has a reference-counting mechanism which is (again remotely) similar to having multiple pointers to the same object. Ownership rests with the shared_ptrs collectively, and the object is deallocated as soon as no shared_ptr instances refer to it.
A lot depends on what they are being used for. And in the case of Ref, what people understand by it. In pre-standard days, I would often use a typedef to Ptr for my (invasive) reference counted pointer; the presence of such a typedef was, in fact, an indication that the type supported reference counting, and that it should always be dynamically allocated.

Both std::auto_ptr and boost::shared_ptr have very special semantics. I'd tend not to use typedefs for them, both because of the special semantics, and because (unlike the case of my invasive reference counted pointer) they are totally independent of the type pointed to. For any given type, you can use them or not, as the program logic requires. (Because of its particular semantics, I find a fair number of uses for std::auto_ptr. I tend to avoid boost::shared_ptr, however; it's rather dangerous.)
auto_ptr is deprecated in C++11. You might want to stop using it and just use shared_ptr. With shared_ptr there is no ownership transfer on assignment: the number of references to the object is counted, and the object is destroyed when the last pointer to it is destroyed.
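A tiny sketch of that reference-counting behavior (Widget is just a made-up placeholder type):

#include <cassert>
#include <memory>

struct Widget {};   // hypothetical payload type for illustration

int main()
{
    std::shared_ptr<Widget> a = std::make_shared<Widget>();
    {
        std::shared_ptr<Widget> b = a;   // no ownership transfer, just another owner
        assert(a.use_count() == 2);
        assert(a.get() == b.get());      // both point at the same Widget
    }                                    // b is destroyed, the count drops back to 1
    assert(a.use_count() == 1);
}                                        // last owner gone: the Widget is deleted here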
As per the documentation, which says:
std::unique_ptr is commonly used to manage the lifetime of objects, including:
as the element type in move-aware containers, such as std::vector,
which hold pointers to dynamically-allocated objects (e.g. if
polymorphic behavior is desired) [emphasis mine]
How should I understand this correctly?
A simple example might help.
Thanks.
This statement refers to the fact that it was notoriously unsafe to use std::auto_ptr as an element in containers. std::auto_ptr has been deprecated since C++11, but before that there were no move semantics in C++, and auto_ptr was the only "smart" pointer that was actually standardized.
std::auto_ptr was similar to std::unique_ptr in that it destroyed the underlying object in its destructor. In contrast to the unique pointer, the auto pointer allowed copying: ownership of the object was transferred to the destination, and the source was left empty. This behavior was unsafe in containers: while you might rely on the container of auto pointers keeping ownership, any innocent-looking copy of an element (for example, assigning an element to a local auto_ptr, or an algorithm copying elements internally) silently moved ownership out of the container and left a null element behind, so the next access to that element would break.
It is worth mentioning that the STL shipped with Visual C++ 6.0 had a different, pre-standard implementation of auto_ptr: instead of resetting the source pointer, copying only transferred an internal ownership flag, so the source kept its pointer value and was left pointing at an object it no longer owned. Storing such objects in a container looked as though it worked, but it caused other problems anyway, such as dangling pointers once the owning copy was destroyed.
Overall, the problems with using auto_ptr in containers were [probably] among the main reasons why std::auto_ptr was deprecated in C++11.
Returning to std::unique_ptr: it is safe to use in containers, and the documentation makes an explicit statement alluding to the earlier problems with std::auto_ptr.
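As a hedged illustration of the quoted statement (the Shape/Circle/Square hierarchy is invented for the example, and make_unique assumes C++14):

#include <iostream>
#include <memory>
#include <utility>
#include <vector>

// Made-up polymorphic hierarchy, purely for illustration.
struct Shape {
    virtual ~Shape() = default;
    virtual const char* name() const = 0;
};
struct Circle : Shape { const char* name() const override { return "circle"; } };
struct Square : Shape { const char* name() const override { return "square"; } };

int main()
{
    // A move-aware container holding pointers to dynamically allocated,
    // polymorphic objects; the vector owns them, so no manual delete is needed.
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>());   // C++14; in C++11 use new Circle
    shapes.push_back(std::make_unique<Square>());

    for (const auto& s : shapes)
        std::cout << s->name() << '\n';

    // Copying an element would not compile; ownership can only be moved out:
    std::unique_ptr<Shape> taken = std::move(shapes.front());   // shapes.front() is now null
}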
I was reading Top 10 dumb mistakes to avoid with C++11 smart pointer.
Number #5 reads:
Mistake #5: Not assigning an object (raw pointer) to a shared_ptr as soon as it is created!
int main()
{
    Aircraft* myAircraft = new Aircraft("F-16");
    shared_ptr<Aircraft> pAircraft(myAircraft);
    ...
    shared_ptr<Aircraft> p2(myAircraft);
    // will do a double delete and possibly crash
}
and the recommendation is something like:
Use make_shared or new and immediately construct the pointer with
it.
OK, no doubt about the problem and the recommendation.
However I have a question about the design of shared_ptr.
This is a very easy mistake to make, and the whole "safe" design of shared_ptr can be defeated by misuses that would be very easy to detect.
Now the question is: could this have easily been fixed with an alternative design of shared_ptr, in which the only constructor from a raw pointer were one taking an r-value reference?
template<class T>
struct shared_ptr{
    shared_ptr(T*&& t){ ...basically current implementation... }
    shared_ptr(T* t) = delete;        // this is to...
    shared_ptr(T* const& t) = delete; // ... illustrate the point.
    shared_ptr(T*& t) = delete;
    ...
};
In this way, shared_ptr could only be initialized from the result of new or from some factory function.
Is this an underexploitation of the C++ language in the library?
Or, what is the point of having a constructor from a raw-pointer l-value if it is most likely going to be a misuse?
Is this a historical accident? (e.g. shared_ptr was proposed before r-value references were introduced, etc) Backwards compatibility?
(Of course one could still write std::shared_ptr<type>(std::move(ptr)); but that is easier to catch, and it is also a workaround if this is really necessary.)
Am I missing something?
Pointers are very easy to copy. Even if you restrict the constructor to r-value references, you can still easily make copies (for example when you pass a pointer as a function parameter), which invalidates the safety setup. Moreover, you will run into problems in templates, where you can easily end up with T* const or T*& as a type and get type mismatches.
So you are proposing to create more restrictions without significant safety gains, which is likely why it was not in the standard to begin with.
The point of make_shared is to make the construction of the shared pointer atomic with the allocation. Say you have f(shared_ptr<int>(new int(5)), throw_some_exception()). The order of evaluation of function arguments is not guaranteed by the standard: the compiler is allowed to create the new int, run throw_some_exception, and only then construct the shared_ptr, which means you could leak the int (if throw_some_exception actually throws an exception). make_shared creates the object and the shared pointer inside itself, which doesn't allow the compiler to interleave the steps, so it becomes safe.
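A small sketch of that scenario; the declarations of f and throw_some_exception are assumptions mirroring the example above:

#include <memory>

int  throw_some_exception();         // assumed: may throw, as in the example above
void f(std::shared_ptr<int>, int);   // assumed signature, for illustration only

void caller()
{
    // Risky before C++17: the compiler may evaluate `new int(5)`, then
    // throw_some_exception(), and only then the shared_ptr constructor;
    // if the call throws in between, the int leaks.
    f(std::shared_ptr<int>(new int(5)), throw_some_exception());

    // Safe: allocation and shared_ptr construction happen inside make_shared,
    // so no raw pointer is ever left without an owner.
    f(std::make_shared<int>(5), throw_some_exception());
}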
I do not have any special insight into the design of shared_ptr, but I think the most likely explanation is that the timelines involved made this impossible:
The shared_ptr was introduced at the same time as rvalue-references, in C++11. The shared_ptr already had a working reference implementation in boost, so it could be expected to be added to standard libraries relatively quickly.
If the constructor for shared_ptr had only supported construction from rvalue references, it would have been unusable until the compiler had also implemented support for rvalue references.
And at that time, compiler and standards development was much more asynchronous, so it could have taken years until all compilers had implemented support, if at all. (Export templates were still fresh in people's minds in 2011.)
Additionally, I assume the standards committee would have felt uncomfortable standardizing an API that did not have a reference implementation, and could not even get one until after the standard was published.
There are a number of cases in which you may not be able to call make_shared(). For example, your code may not be responsible for allocating and constructing the class in question. The following paradigm (private constructors + factory functions) is used in some C++ code bases for a variety of reasons:
struct A {
private:
    A();
};

A* A_factory();
In this case, if you wanted to stick the A* you get from A_factory() into a shared_ptr<>, you'd have to use the constructor which takes a raw pointer instead of make_shared().
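For instance, a compilable sketch of that situation (the friend declaration and the factory body are assumptions added here only to make the snippet self-contained):

#include <memory>

struct A {
    friend A* A_factory();   // added so the factory can reach the private constructor
private:
    A() {}
};

A* A_factory() { return new A; }   // assumed to hand ownership to the caller

int main()
{
    // make_shared cannot be used here, so the raw-pointer constructor is the
    // only way to hand the factory's result to a shared_ptr.
    std::shared_ptr<A> a(A_factory());
}   // when the last shared_ptr goes away, the A is deleted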
Off the top of my head, some other examples:
You want to get aligned memory for your type using posix_memalign() and then store it in a shared_ptr<> with a custom deleter that calls free(); see the sketch after this list. (This use case will go away soon when we add aligned allocation to the language!)
You want to stick a pointer to a memory-mapped region created with mmap() into a shared_ptr<> with a custom deleter that calls munmap() (this use case will go away when we get a standardized facility for shmem, something I'm hoping to work on in the next few months).
You want to stick a pointer allocated by some other facility into a shared_ptr<> with a matching custom deleter.
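A sketch of the first bullet above, assuming a POSIX system and a hypothetical helper name:

#include <cstdlib>   // posix_memalign (POSIX), std::free
#include <memory>

// The memory does not come from new, so the shared_ptr gets a custom deleter
// that calls free() instead of delete.
std::shared_ptr<void> make_aligned_block(std::size_t alignment, std::size_t size)
{
    void* mem = nullptr;
    // alignment must be a power of two and a multiple of sizeof(void*)
    if (posix_memalign(&mem, alignment, size) != 0)
        return nullptr;
    return std::shared_ptr<void>(mem, [](void* p) { std::free(p); });
}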
I've been making some objects using the pimpl idiom, but I'm not sure whether to use std::shared_ptr or std::unique_ptr.
I understand that std::unique_ptr is more efficient, but this isn't so much of an issue for me, as these objects are relatively heavyweight anyway so the cost of std::shared_ptr over std::unique_ptr is relatively minor.
I'm currently going with std::shared_ptr just because of the extra flexibility. For example, using a std::shared_ptr allows me to store these objects in a hashmap for quick access while still being able to return copies of these objects to callers (as I believe any iterators or references may quickly become invalid).
However, these objects in a way really aren't being copied, since changes affect all copies, so I was wondering whether using std::shared_ptr and allowing copies is some sort of anti-pattern or a bad thing.
Is this correct?
I've been making some objects using the pimpl idiom, but I'm not sure whether to use shared_ptr or unique_ptr.
Definitely unique_ptr or scoped_ptr.
Pimpl is not a pattern, but an idiom, which deals with compile-time dependency and binary compatibility. It should not affect the semantics of the objects, especially with regard to its copying behavior.
You may use whatever kind of smart pointer you want under the hood, but those two guarantee that you won't accidentally share the implementation between two distinct objects, as they require a conscious decision about the implementation of the copy constructor and assignment operator.
However, these objects in a way really aren't being copied, as changes affect all copies, so I was wondering that perhaps using shared_ptr and allowing copies is some sort of anti-pattern or bad thing.
It is not an anti-pattern; in fact, it is a pattern: Aliasing. You already use it, in C++, with bare pointers and references. shared_ptr offers an extra measure of "safety" to avoid dead references, at the cost of extra complexity and new issues (beware of cycles, which create memory leaks).
Unrelated to Pimpl
I understand unique_ptr is more efficient, but this isn't so much of an issue for me, as these objects are relatively heavyweight anyway so the cost of shared_ptr over unique_ptr is relatively minor.
If you can factor out some state, you may want to take a look at the Flyweight pattern.
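For reference, a minimal Pimpl sketch along the lines recommended above, using std::unique_ptr; the names and the split into widget.h/widget.cpp are illustrative:

// widget.h: minimal Pimpl sketch with unique_ptr
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                              // defined where Impl is complete
    Widget(Widget&&) noexcept;              // moving is fine; copying stays disabled,
    Widget& operator=(Widget&&) noexcept;   // so two Widgets never share an Impl
private:
    struct Impl;                            // defined only in widget.cpp
    std::unique_ptr<Impl> pimpl_;
};

// widget.cpp
struct Widget::Impl { int data = 0; };

Widget::Widget() : pimpl_(new Impl) {}
Widget::~Widget() = default;                // Impl is complete here, so this compiles
Widget::Widget(Widget&&) noexcept = default;
Widget& Widget::operator=(Widget&&) noexcept = default;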
If you use shared_ptr, it's not really the classical pimpl idiom (unless you take additional steps). But the real question is why you want to use a smart pointer to begin with; it's very clear where the delete should occur, and there's no issue of exception safety or other to be concerned with. At most, a smart pointer will save you a line or two of code. And the only one which has the correct semantics is boost::scoped_ptr, and I don't think it works in this case. (IIRC, it requires a complete type in order to be instantiated, but I could be wrong.)

An important aspect of the pimpl idiom is that its use should be transparent to the client; the class should behave exactly as if it were implemented classically. This means either inhibiting copy and assignment or implementing deep copy, unless the class is immutable (no non-const member functions). None of the usual smart pointers implement deep copy; you could implement one, of course, but it would probably still require a complete type whenever the copy occurs, which means that you'd still have to provide a user defined copy constructor and assignment operator (since they can't be inline). Given this, it's probably not worth the bother using the smart pointer.

An exception is if the objects are immutable. In this case, it doesn't matter whether the copy is deep or not, and shared_ptr handles the situation completely.
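As a sketch of the deep-copy approach described above (assuming C++11 and invented names; with a user-provided copy constructor and assignment operator defined out of line, the class behaves as if implemented classically):

// Header: a deep-copying Pimpl class; the copy operations are only declared
// here, because defining them needs the complete Impl type.
#include <memory>

class Document {
public:
    Document();
    ~Document();
    Document(const Document& other);              // deep copy, defined out of line
    Document& operator=(const Document& other);
private:
    struct Impl;
    std::unique_ptr<Impl> pimpl_;
};

// Source file: Impl is complete here, so copying it is possible.
struct Document::Impl { int data = 0; };

Document::Document() : pimpl_(new Impl) {}
Document::~Document() = default;
Document::Document(const Document& other) : pimpl_(new Impl(*other.pimpl_)) {}
Document& Document::operator=(const Document& other)
{
    *pimpl_ = *other.pimpl_;                      // copy the state, keep our own Impl
    return *this;
}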
When you use a shared_ptr (for example, store it in a container, then look it up and return it by value), you are not causing a copy of the object it points to, simply a copy of the pointer along with an increment of the reference count.
This means that if you modify the underlying object from multiple points, you are making changes to the same instance. This is exactly what it is designed for, so it is not some anti-pattern!
When passing a shared_ptr, it's better (as the comments say) to pass by const reference and to copy (thereby incrementing the reference count) only where needed. As for returning, decide case by case.
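A brief sketch of that passing convention (Object and the function names are made up):

#include <memory>

struct Object {};                                    // hypothetical payload type

void inspect(const std::shared_ptr<Object>& p);      // by const ref: no count increment

std::shared_ptr<Object> keeper;                      // some longer-lived owner
void retain(const std::shared_ptr<Object>& p)
{
    keeper = p;                                      // copy only here, where ownership
}                                                    // actually needs to be kept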
Yes, please use them. Simply put, shared_ptr is an implementation of a shared (reference-counted) smart pointer, and unique_ptr is an implementation of an exclusive-ownership (automatic) pointer.
We've pretty much moved over to using boost::shared_ptr in all of our code; however, we still have some isolated cases where we use std::auto_ptr, including singleton classes:
template <typename TYPE>
class SharedSingleton
{
public:
    static TYPE& Instance()
    {
        if (_ptrInstance.get() == NULL)
            _ptrInstance.reset(new TYPE);
        return *_ptrInstance;
    }

protected:
    SharedSingleton() {};

private:
    static std::auto_ptr<TYPE> _ptrInstance;
};
I've been told that there's a very good reason why this hasn't been made a shared_ptr, but for the life of me I can't understand why. I know that auto_ptr will eventually be marked as deprecated in the next standard, so I'd like to know what I can replace this implementation with, and how.
Also, are there any other reasons why you'd consider using an auto_ptr instead of a shared_ptr? And do you see any problems moving to shared_ptr in the future?
Edit:
So in answer to "can I safely replace auto_ptr with shared_ptr in the above code", the answer is yes - however I'll take a small performance hit.
When auto_ptr is eventually marked as deprecated and we move over to std::shared_ptr, we'll need to thoroughly test our code to make sure we're abiding by the different ownership semantics.
auto_ptr and shared_ptr solve entirely different problems. One does not replace the other.
auto_ptr is a thin wrapper around pointers that implements RAII semantics, so that resources are always released, even in the face of exceptions. auto_ptr does not perform any reference counting or the like at all; it does not make multiple pointers point to the same object when copies are created. In fact, it's very different: auto_ptr is one of the few classes where the assignment operator modifies the source object. Consider this example, shamelessly lifted from the auto_ptr Wikipedia page:
int *i = new int;
auto_ptr<int> x(i);
auto_ptr<int> y;
y = x;
cout << x.get() << endl; // Print NULL
cout << y.get() << endl; // Print non-NULL address i
Note how executing
y = x;
modifies not only y but also x.
The boost::shared_ptr template makes it easy to handle multiple pointers to the same object, and the object is only deleted after the last reference to it has gone out of scope. This feature is not useful in your scenario, which (attempts to) implement a Singleton: there are always either zero references or one reference to the only object of the class, if any.
In essence, auto_ptr objects and shared_ptr objects have entirely different semantics (that's why you cannot use the former in containers, but doing so with the latter is fine), and I sure hope you have good tests to catch any regressions you introduced while porting your code. :-}
Others have answered why this code uses an auto_ptr instead of a shared_ptr. To address your other questions:
What/how I can replace this implementation?
Use either boost::scoped_ptr or unique_ptr (available in both Boost and the new C++ standard). Both scoped_ptr and unique_ptr provide strict ownership (and no reference-counting overhead), and they avoid the surprising delete-on-copy semantics of auto_ptr.
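As a sketch, the singleton above rewritten with std::unique_ptr (assuming C++11; a boost::scoped_ptr version would look much the same):

#include <memory>

template <typename TYPE>
class SharedSingleton
{
public:
    static TYPE& Instance()
    {
        if (!_ptrInstance)
            _ptrInstance.reset(new TYPE);
        return *_ptrInstance;
    }

protected:
    SharedSingleton() {}

private:
    static std::unique_ptr<TYPE> _ptrInstance;   // strict ownership, no copy surprises
};

// Static member definition (a template, so it can live in the header):
template <typename TYPE>
std::unique_ptr<TYPE> SharedSingleton<TYPE>::_ptrInstance;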
Also, are there any other reasons why you'd consider using an auto_ptr instead of a shared_ptr? And do you see any problems moving to shared_ptr in the future?
Personally, I would not use an auto_ptr. Delete-on-copy is just too non-intuitive. Herb Sutter seems to agree. Switching to scoped_ptr, unique_ptr, or shared_ptr should offer no problems. Specifically, shared_ptr should be a drop-in replacement if you don't care about the reference counting overhead. scoped_ptr is a drop-in replacement if you aren't using auto_ptr's transfer-of-ownership capabilities. If you are using transfer-of-ownership, then unique_ptr is almost a drop-in replacement, except that you need to instead explicitly call move to transfer ownership. See here for an example.
auto_ptr is the only kind of smart pointer I use. I use it because I don't use Boost, and because I generally prefer my business/application-oriented classes to explicitly define deletion semantics and order, rather than depend on collections of, or individual, smart pointers.
In my opinion, a class should provide a well-defined abstraction, and no private members should be modified without the class's knowledge. But when I examined "auto_ptr" (or any other smart pointer), this rule is violated. Please see the following code:
#include <memory>

class Foo{
public:
    Foo(){}
};

int main(int argc, char* argv[])
{
    std::auto_ptr<Foo> fooPtr(new Foo);
    delete fooPtr.operator ->();   // destroys the object behind auto_ptr's back;
                                   // fooPtr will delete it again => double delete
    return 0;
}
The operator overload (->) gives out the underlying pointer, and the object can be deleted without "auto_ptr" knowing about it. I can't believe this is bad design, since the smart pointers were designed by C++ experts, but I am wondering why they allowed it. Is there any way to write a smart pointer without this problem?
Appreciate your thoughts.
There are two desirable properties a smart pointer should have:
The raw pointer can be retrieved (e.g. for passing to legacy library functions)
The raw pointer cannot be retrieved (to prevent double-delete)
Obviously, these properties are contradictory and cannot be realised at the same time! Even Boost's shared_ptr<Foo> et al. have get(), so they have this "problem." In practice, the first is more important, so the second has to go.
By the way, I'm not sure why you reached for the slightly obscure operator->() when the ordinary old get() method causes the same problem:
std::auto_ptr<Foo> fooPtr(new Foo);
delete fooPtr.get();
In order to provide fast, convenient, "pointer-like" access to the underlying object, operator-> unfortunately has to "leak" its abstraction a bit. Otherwise, smart pointers would have to manually wrap all of the members that are allowed to be exposed. This either requires a lot of "configuration" work on the part of those instantiating the smart pointer, or a level of meta-programming that just isn't present in C++. Besides, as pyrsta points out, even if this hole were plugged, there would still be many other (perhaps non-standard) ways to subvert C++'s access control mechanisms.
Is there any way to write a smart pointer without this problem?
It isn't easy, and generally no (i.e., you can't do it for every general Foo class).
The only way I can think of to do this would be to change the declaration of the Foo class: make the Foo destructor private (or define a private operator delete as a member of the Foo class), and also declare std::auto_ptr<Foo> a friend of Foo.
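A sketch of that approach, swapping in C++11's unique_ptr (my assumption, not the answer's; with auto_ptr the friend would be std::auto_ptr<Foo> itself, since its destructor performs the delete):

#include <memory>

// The delete happens inside std::default_delete<Foo>, so that is the friend
// we grant access to; all other code is locked out of ~Foo().
class Foo {
public:
    Foo() {}
    friend struct std::default_delete<Foo>;   // only the smart pointer's deleter may destroy Foo
private:
    ~Foo() {}
};

int main()
{
    std::unique_ptr<Foo> p(new Foo);   // fine: default_delete<Foo> is a friend
    // delete p.get();                 // would not compile: ~Foo() is private here
}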
No, there's no way to completely prohibit such bad usage in C++.
As a general rule, the user of any library code should never call delete on any wrapped pointers unless specifically documented. And in my opinion, all modern C++ code should be designed so that the user of the classes is never left with the full responsibility of manually releasing acquired resources (i.e., use RAII instead).
Aside note: std::auto_ptr<T> isn't the best option anymore. Its bad behaviour on copying can lead to serious coding errors. Often a better idea is to use boost::scoped_ptr<T> or std::tr1::shared_ptr<T> (or its Boost variant) instead.
Moreover, in C++0x, std::unique_ptr<T> will functionally supersede std::auto_ptr<T> as a safer-to-use class. Some discussion on the topic and a recent C++03 implementation for unique_ptr emulation can be found here.
I don't think this shows that auto_ptr has an encapsulation problem. Whenever dealing with owned pointers, it is critical for people to understand who owns what. In the case of auto_ptr, it owns the pointer that it holds[1]; this is part of auto_ptr's abstraction. Therefore, deleting that pointer in any other way violates the contract that auto_ptr provides.
I'd agree that it's relatively easy to misuse auto_ptr[2], which is far from ideal, but in C++ you can never avoid the fundamental issue of "who owns this pointer?", because, for better or worse, C++ does not manage memory for you.
[1] Quote from cplusplus.com: "auto_ptr objects have the peculiarity of taking ownership of the pointers assigned to them": http://www.cplusplus.com/reference/std/memory/auto_ptr/
[2] For example, you might mistakenly believe that it has value semantics, and use it as a vector template parameter: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEEQFjAD&url=http%3A%2F%2Fwww.gamedev.net%2Ftopic%2F502150-c-why-is-stdvectorstdauto_ptrmytype--bad%2F&ei=XU1qT5i9GcnRiAKCiu20BQ&usg=AFQjCNHigbgumbMG3MTmMPla2zo4LhaE1Q&sig2=WSyJF2eWrq2aB2qw8dF3Dw
I think this question addresses a non-issue. Smart pointers are there to manage ownership of pointers, and if, in doing so, they made the pointer inaccessible, they would fail their purpose.
Also consider this: any container type gives you iterators over it; if it is such an iterator, then &*it is a pointer to an item in the container, and if you say delete &*it you are dead. But exposing the addresses of its items is not a defect of container types.