I've noticed a strange fact about shared_ptr
int* p = nullptr;
std::shared_ptr<int> s(p); // creates a control block; use_count() == 1
std::shared_ptr<int> s2(s); // use_count() goes to 2
assert(s.use_count() == 2);
I wonder what the semantics behind this are. Why are s and s2 sharing a nullptr? Does it make any sense?
Or maybe this uncommon situation doesn't deserve an if statement (too costly?)?
Thanks for any enlightenment.
The semantics are:
If you default-construct a shared pointer, or construct one from std::nullptr_t, it's empty; that is, it doesn't own any pointer.
If you construct one from a raw pointer, it takes ownership of that pointer, whether or not it's null. I guess that's done for the reason you mention (avoiding a runtime check), but I can only speculate about that.
So your example isn't empty; it owns a null pointer.
Who cares that they are sharing the nullptr? As soon as they are both destroyed, they will attempt to delete nullptr, which will have no effect. That's perfectly okay and fits in with the semantics of shared_ptr.
It makes sense in the way that no special case is required for null pointer values. Whether it occurs in your code just comes down to whether you have some function that can take or return null shared_ptrs.
shared_ptr will count and eventually delete whatever pointer it was given (not only nullptr, but an invalid pointer as well). So in your example this ends up with delete being called on nullptr, which is a valid, well-defined case.
Looking at this implementation of std::shared_ptr https://thecandcppclub.com/deepeshmenon/chapter-10-shared-pointers-and-atomics-in-c-an-introduction/781/ :
Question 1: I can see that we're using std::atomic<int*> to store the pointer to the reference count associated with the resource being managed. Now, in the destructor of the shared_ptr, we're changing the value of the ref-count itself (like --(*reference_count)). Similarly, when we make a copy of the shared_ptr, we increment the ref-count value. However, in both these operations, we're not changing the value of the pointer to the ref-count but rather the ref-count itself. Since the pointer to the ref-count is the "atomic thing" here, I was wondering how ++/-- operations on the ref-count would be thread-safe. Is std::atomic implemented internally in a way such that, in the case of pointers, it ensures changes to the underlying object itself are also thread-safe?
Question 2: Do we really need the nullptr check in the default_deleter class before calling delete on ptr? As per Is it safe to delete a NULL pointer?, it is harmless to call delete on nullptr.
Question 1:
The implementation linked to is not thread-safe at all. You are correct that the shared reference counter should be atomic, not pointers to it. std::atomic<int*> here makes no sense.
Note that just changing std::atomic<int*> to std::atomic<int>* won't be enough to fix this either. For example the destructor is decrementing the reference count and checking it against 0 non-atomically. So another thread could get in between these two operations and then they will both think that they should delete the object causing undefined behavior.
As mentioned by @fabian in the comments, it is also far from a correct non-thread-safe shared pointer implementation. For example, with the test case
{
    Shared_ptr<int> a(new int);
    Shared_ptr<int> b(new int);
    b = a;
}
it will leak the second allocation. So it doesn't even do the basics correctly.
Even more, in the simple test case
{
    Shared_ptr<int> a(new int);
}
it leaks the memory allocated for the reference counter (which, in fact, it leaks in every case).
Question 2:
There is no reason to have a null pointer check there except to avoid printing the message. In fact, if we want default_deleter to adhere to the standard's specification of std::default_delete, then it is arguably wrong to check for nullptr, since that is specified to call delete unconditionally.
But the only possible edge case where this could matter is if a custom operator delete would be called that causes some side effect for a null pointer argument. However, it is anyway unspecified whether delete will call operator delete if passed a null pointer, so that's not practically relevant either.
Is it always wise to use NULL after a delete in legacy code without any smartpointers to prevent dangling pointers? (bad design architecture of the legacy code excluded)
int* var = new int(100);
delete var;
var = NULL;
Does it also make sense in destructors?
In a getter, does it make sense to test for NULL in second step?
Or is it undefined behavior anyway?
Foo* getPointer() {
    if (m_var != NULL) { // <- is this wise?
        return m_var;
    }
    else {
        return nullptr;
    }
}
What about this formalism as an alternative? In which cases will it crash?
Foo* getPointer() {
    if (m_var) { // <-
        return m_var;
    }
    else {
        return nullptr;
    }
}
(Edit) Will the code in examples 3/4 crash if A. NULL is used after delete, or B. NULL is not used after delete?
Is it always wise to use NULL after a delete in legacy code without any smartpointers to prevent dangling pointers? (bad design architecture of the legacy code excluded)
int* var = new int(100);
// ...
delete var;
var = NULL;
Only useful if you test var afterward.
If the scope ends, or if you assign another value to it, setting it to null is unneeded.
Does it also make sense in destructors?
Nullifying members in the destructor is useless, as you cannot access them afterward without UB anyway (but it might help with a debugger).
In a getter, does it make sense to test for NULL in a second step? Or is it undefined behavior anyway?
[..]
[..]
if (m_var != NULL) and if (m_var) are equivalent.
It is unneeded: if the pointer is nullptr, you return nullptr,
and if the pointer is not nullptr, you return that pointer, so your getter can simply be
return m_var;
Avoid writing code like this
int* var = new int(100);
// ... do work ...
delete var;
This is prone to memory leaks if "do work" throws, returns, or otherwise breaks out of the current scope (it may not be the case right now, but later, when "do work" needs to be extended or changed). Always wrap heap-allocated objects in RAII so that the destructor always runs on scope exit, freeing the memory.
If you do have code like this, then setting var to NULL or even better a bad value like -1 in a Debug build can be helpful in catching use-after-free and double-delete errors.
In case of a destructor:
Setting the pointer to NULL in a destructor is not needed.
In production code it's a waste of CPU time (writing a value that will never be read again).
In debug code it makes catching double-deletes harder. Some compilers fill deleted objects with a marker like 0xDDDDDDDD such that a second delete or any other dereference of the pointer will cause a memory access exception. If the pointer is set to NULL, delete will silently ignore it, hiding the error.
This question is really opinion-based, so I'll offer some opinions ... but also a justification for those opinions, which will hopefully be more useful for learning than the opinions themselves.
Is it always wise to use NULL after a delete in legacy code without any smartpointers to prevent dangling pointers? (bad design architecture of the legacy code excluded)
Short answer: no.
It is generally recommended to avoid raw pointers whenever possible, regardless of which C++ standard your code claims compliance with.
Even if you somehow find yourself needing to use a raw pointer, it is safer to ensure the pointer ceases to exist when no longer needed, rather than setting it to NULL. That can be achieved with scope (e.g. the pointer is local to a scope, and that scope ends immediately after delete pointer - which absolutely prevents subsequent use of the pointer at all). If a pointer cannot be used when no longer needed, it cannot be accidentally used - and does not need to be set to NULL. This also works for a pointer that is a member of a class, since the pointer ceases to exist when the containing object does i.e. after the destructor completes.
The idiom of "set a pointer to NULL when no longer needed, and check for NULL before using it" doesn't prevent stupid mistakes. As a rough rule, any idiom that requires a programmer to remember to do something - such as setting a pointer to NULL, or comparing a pointer to NULL - is vulnerable to programmer mistakes (forgetting to do what they are required to do).
Does it also make sense in destructors?
Generally speaking, no. Once the destructor completes, the pointer (assuming it is a member of the class) will cease to exist as well. Setting it to NULL immediately before it ceases to exist achieves nothing.
If you have a class with a destructor that, for some reason, shares the pointer with other objects (i.e. the value of the pointer remains valid, and presumably the object it points at, still exist after the destructor completes) then the answer may be different. But that is an exceedingly rare use case - and one which is usually probably better avoided, since it becomes more difficult to manage lifetime of the pointer or the object it points at - and therefore easier to introduce obscure bugs. Setting a pointer to NULL when done is generally not a solution to such bugs.
In a getter, does it make sense to test for NULL in a second step? Or is it undefined behavior anyway?
Obviously that depends on how the pointer was initialised. If the pointer is uninitialised, even comparing it with NULL gives undefined behaviour.
In general terms, I would not do it. There will presumably be some code that initialised the pointer. If that code cannot appropriately initialise the pointer, then that code should deal with the problem in a way that prevents your function being called. Examples include throwing an exception or terminating program execution. That allows your function to safely ASSUME the pointer points at a valid object.
What about this formalism as an alternative? In which cases will it crash?
The "formalism" is identical to the previous one - practically the difference is stylistic. In both cases, if m_var is uninitialised, accessing its value gives undefined behaviour. Otherwise the behaviour of the function is well-defined.
A crash is not guaranteed in any circumstances. Undefined behaviour is not required to result in a crash.
If the caller exhibits undefined behaviour (e.g. if your function returns NULL the caller dereferences it anyway) there is nothing your function can do to prevent that.
The case you describe remains relatively simple, because the variable is declared in a local scope.
But look for example at this scenario:
struct MyObject
{
public:
    MyObject(int i) { m_piVal = new int(i); }
    ~MyObject() {
        delete m_piVal;
    }
public:
    static int* m_piVal;
};
int* MyObject::m_piVal = NULL;
You may get a double-free problem by writing this:
MyObject* pObj1 = new MyObject(1);
MyObject* pObj2 = new MyObject(2);
//...........
delete pObj1;
delete pObj2; // Double free on the static pointer (m_piVal)
Or here:
struct MyObject2
{
public:
    MyObject2(int i) { m_piVal = new int(i); }
    ~MyObject2() {
        delete m_piVal;
    }
public:
    int* m_piVal;
};
when you write this:
MyObject2 Obj3 (3);
MyObject2 Obj4 = Obj3;
At destruction, you will get a double free here, because Obj3.m_piVal == Obj4.m_piVal.
So there are some cases that need special attention (implementing a smart pointer, a copy constructor, ...) to manage the pointer correctly.
According to the first answer in this article: Explicitly deleting a shared_ptr
Is it possible to force delete a std::shared_ptr and the object it manages like below code?
do {
    ptr.reset();
} while (!ptr.unique());
ptr.reset(); // To eliminate the last reference
Technically, this should keep calling std::shared_ptr::reset while the pointer has a reference count greater than 1, until it reaches one. Any thoughts on this?
This code doesn't make any sense.
Once you reset ptr, it doesn't manage an object anymore. If ptr was the only shared_ptr sharing ownership, then you're done. If it wasn't... well, you don't have access to all those other ones. Calling reset() on a disengaged shared_ptr is effectively a no-op; there's nothing more to reset.
Imagine a simple scenario:
std::shared_ptr<int> a = std::make_shared<int>(42);
std::shared_ptr<int> b = a; // a and b are sharing ownership of an int
do {
    a.reset();
} while (!a.unique());
The only way to reset b is to reset b - this code will reset a only, it cannot possibly reach b.
Also note that unique() was deprecated in C++17 and removed entirely in C++20. But even if you use use_count() instead, once you do a.reset(), a.use_count() will be equal to 0, because a no longer points to an object.
No this is not possible (or desirable). The point of a shared pointer is that if you have one you can guarantee the object it points to (if any) will not disappear from under you until (at least) you have finished with it.
Calling ptr.reset() will only reduce the reference count by 1 - being your shared pointer's reference. It will never affect other references from other shared pointers that are sharing your object.
We came across something we cannot explain at work, and even though we found a solution, I would like to know exactly why the first code was fishy.
Here is a minimal code example:
#include <iostream>
#include <memory>
#include <vector>
#include <algorithm>
int main() {
    std::vector<std::shared_ptr<int>> r;
    r.push_back(std::make_shared<int>(42));
    r.push_back(std::make_shared<int>(1337));
    r.push_back(std::make_shared<int>(13));
    r.push_back(std::make_shared<int>(37));

    int* s = r.back().get();

    auto it = std::find(r.begin(), r.end(), s);                       // 1 - compilation error
    auto it = std::find(r.begin(), r.end(), std::shared_ptr<int>(s)); // 2 - runtime error
    auto it = std::find_if(r.begin(), r.end(), [s](std::shared_ptr<int> i) {
        return i.get() == s;
    });                                                               // 3 - works fine

    if (it == r.end())
        std::cout << "oups" << std::endl;
    else
        std::cout << "found" << std::endl;
    return 0;
}
So what I want to know is why the finds are not working.
For the first one, it seems that shared_ptr does not have a comparison operator for raw pointers; can someone explain why?
The second one seems to be a problem of ownership and multiple deletes (when my local shared_ptr goes out of scope, it deletes my pointer). But what I don't understand is why the runtime error occurs during the find execution; the double delete should happen only on the vector's destruction. Any thoughts?
I have a working solution with find_if, so what I really want is to understand why the first two are not working, not another working solution (but if you have a more elegant one, feel free to post it).
For the first one, it seems that shared_ptr does not have a comparison
operator for raw pointers; can someone explain why?
Subjective, but I certainly don't consider it a good idea for shared pointers to be comparable to raw pointers, and I think the authors of std::shared_ptr and the standard's committee agree with that sentiment.
The second one seems to be a problem of ownership and multiple deletes
(when my local shared_ptr goes out of scope, it deletes my pointer). But
what I don't understand is why the runtime error occurs during the find
execution; the double delete should happen only on the vector's
destruction. Any thoughts?
s is a pointer to an int that was allocated by make_shared as part of a block, together with the reference counting information. It is implementation-defined how it actually was allocated, but you can be sure it was not with a simple unadorned new expression, because that would allocate a separate int in its own memory location. I.e., it was not allocated in any of these ways:
p = new int;
p = new int(value);
p = new int{value};
Then you passed s to the constructor of a new shared_ptr (the shared_ptr you passed as an argument to std::find). Since you didn't pass a special deleter along with the pointer, the default deleter will be used. The default deleter will simply call delete on the pointer.
Since the pointer was not allocated with an unadorned new expression, calling delete on it is undefined behavior. Since the temporary shared_ptr will be destroyed at the end of the statement, and it believes it is the sole owner of the integer, delete will be called on the integer at the end of the statement. This is likely the cause of your runtime error.
Try the following, easier to reason about snippet, and you will likely run into the same problem:
auto p = std::make_shared<int>(10);
delete p.get(); // This will most likely cause the same error.
// It is undefined behavior though, so there
// are no guarantees on that.
The smart pointer class template std::shared_ptr<> only supports operators for comparison against other std::shared_ptr<> objects; not raw pointers. Specifically, these are supported in that case:
operator== - Equivalence
operator!= - Negated equivalence
operator< - Less-than
operator<= - Less-than or equivalent
operator> - Greater-than
operator>= - Greater-than or equivalent
Read here for more info
Regarding why in the first case: it isn't just a question of value; it's a question of equivalence. A std::shared_ptr<> cannot be considered equivalent or comparable to a raw address, simply because that raw address may not be held within a shared pointer. And even if the addresses are equal in value, that doesn't mean the latter came from a properly reference-counted source (i.e. another shared pointer). Interestingly, your second example exposes what happens when you try to rig that system.
Regarding the second case: constructing a shared pointer as you did produces two independent shared pointers, each claiming sole ownership of the same dynamic resource. So ask yourself: which one gets to delete it? Um... yeah. Only when you copy the std::shared_ptr<> itself will the reference-count material shared among shared pointers holding the same datum be properly managed, so your code in this case is just plain wrong.
If you want to hunt a raw address down in a collection of shared pointers, your third method is exactly how you should do it.
Edit: Why does the ownership issue in case 2 surface where it does?
Ok, I did some hunting, and it turns out it's a runtime thing (at least on my implementation). I would have to check to know for sure whether this behavior (of std::make_shared) is mandated by the standard, but I doubt it. The bottom line is this. These two things:
r.push_back(new int(42));
and
r.push_back(std::make_shared<int>(42));
can do very different things. The former dynamically allocates a new int, then sends its address off to the matching constructor for std::shared_ptr<int>, which allocates its own shared reference data that manages reference counting for the provided address. I.e., there are two distinct blocks of data from separate allocations.
But the latter does something different. It allocates the object and the shared reference data in the same memory block, using placement-new for the object itself and either move-construction or copy-construction depending on what is provided/appropriate. The result is that there is one memory allocation, and it holds both the reference data and the object, the latter being an offset within the allocated memory. Therefore the pointer you're handing to your shared_ptr did not come from an allocation's return value.
Try the first one, and I bet you'll see your runtime error relocate to the destruction of the vector rather than the conclusion of the find.
bool operator ==(const std::shared_ptr<T>&, const T*) doesn't exist.
It is a bad usage of std::shared_ptr;
it is as if you did:
int* p = new int(42);
std::shared_ptr<int> sp1(p);
std::shared_ptr<int> sp2(p); // Incorrect, should be sp2(sp1)
// each sp1 and sp2 will delete p at end of scope -> double delete ...
That is, if I don't use the copy constructor, assignment operator, or move constructor etc.
int* number = new int();
auto ptr1 = std::shared_ptr<int>( number );
auto ptr2 = std::shared_ptr<int>( number );
Will there be two strong references?
According to the standard, use_count() returns 1 immediately after a shared_ptr is constructed from a raw pointer (§20.7.2.2.1/5). We can infer from this that, no, two shared_ptr objects constructed from raw pointers are not "aware" of each other, even if the raw pointers are the same.
Yes, there will be two strong references; there's no global record of all shared pointers that gets looked up to see whether the pointer you're trying to cover is already covered by another smart pointer. (It's not impossible to build something like this yourself, but it's not something you should have to do.)
The smart pointer creates its own reference counter, and in your case there would be two separate ones keeping track of the same pointer.
So either smart pointer may delete the content without being aware of the fact that it is also held in another smart pointer.
Your code is asking for a crash!
You cannot have two independent smart pointers pointing to the same actual object, because both will try to call its destructor and release the memory when their reference counters go to 0.
So, if you want to have two smart pointers pointing to the same object you must do:
auto ptr1 = std::make_shared<int>(10);
auto ptr2 = ptr1;