While I do understand why there is no operator== for shared_ptr and unique_ptr, I wonder why there is none for shared_ptr and weak_ptr. Especially since you can create a weak_ptr from a shared_ptr.
I would assume that 99% of the time you want lhs.get() == rhs.get(). I would now go forward and introduce that into my code unless someone can name a good reason why one should not do such a thing.
weak_ptr doesn't have a get() method because you need to explicitly lock the weak_ptr before you can access the underlying pointer. Making this explicit is a deliberate design decision. If the conversion were implicit, it would be very easy to write code that would be unsafe if the last shared_ptr to the object were destroyed while the raw pointer obtained from the weak_ptr was still being examined.
This boost page has a good description of the pitfalls and why weak_ptr has such a limited interface.
If you need to do a quick comparison, then you can do shared == weak.lock(). If the comparison is true then you know that weak must still be valid as you hold a separate shared_ptr to the same object. There is no such guarantee if the comparison returns false.
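For example, a minimal sketch of that comparison (Widget and refers_to are illustrative names, not from the question):
#include <memory>

struct Widget {};

bool refers_to(const std::shared_ptr<Widget>& shared, const std::weak_ptr<Widget>& weak)
{
    // lock() yields an empty shared_ptr if 'weak' has expired, so a true
    // result proves 'weak' still refers to the object that 'shared' owns.
    return shared != nullptr && shared == weak.lock();
}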
Because it has a cost.
A weak_ptr is like an observer, not a real pointer. To do any work with it you first need to obtain a shared_ptr from it using its lock() method.
This has the effect of acquiring ownership, but it is as costly as copying a regular shared_ptr (reference count increment, etc.), so it is not trivial.
As such, by not providing ==, you are forced to step back and actually check whether you really need this or not.
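For illustration, a hedged sketch of the explicit lock-then-use step this design forces you to write (Widget and observe are placeholder names):
#include <memory>

struct Widget { void process() {} };

void observe(const std::weak_ptr<Widget>& weak)
{
    // lock() copies a shared_ptr (an atomic reference-count increment),
    // so it is not free, but it pins the object for the duration of the scope.
    if (std::shared_ptr<Widget> pinned = weak.lock()) {
        pinned->process();
    }
    // else: the object is already gone and there is nothing to observe
}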
As the other answers have pointed out, simply comparing the underlying pointers would be perilous. For one, consider the following scenario: a weak reference A exists to an object, which is subsequently deleted, and therefore the weak reference expires. Then, another object is allocated in the memory freed up by said deletion, which has the same address. Now the underlying pointers are the same, even though the weak pointer originally referred to a different object!
As the other answers have suggested, one way is to compare shared == weak.lock(). Since lock() will return nullptr (and not some bogus pointer) if the weak pointer has expired, this works for identifying whether they are equal (as long as shared != nullptr). However, there are two problems with this:
It stops working when the weak pointer expires, in which case the comparison changes; after the expiration, it will only return true if shared == nullptr. This can be dangerous in cases where the comparison must remain stable, such as when using it as a key in an unordered_map or unordered_set.
lock() is a relatively expensive operation.
Fortunately, there is a better way to do this. Both weak_ptr and shared_ptr also store a pointer to what is known as the control block, which is what stores the reference counts and outlives the original object for as long as references remain to it. To check whether they refer to the same object, all we need to do is compare the control block pointers. This can be done with the owner_before method:
template<class T>
bool owner_equals(const std::shared_ptr<T> &lhs, const std::weak_ptr<T> &rhs) {
    // equal owners: neither control block orders before the other
    return !lhs.owner_before(rhs) && !rhs.owner_before(lhs);
}
This approach will even work for comparing two std::weak_ptrs with each other, if you wish to know whether they (once) referred to the same object, since the control block will last (at least) as long as all of the weak references.
Do keep in mind that this may not produce the expected result if you are using the aliasing feature of std::shared_ptr, which is a feature that lets you create two std::shared_ptr instances with the same control block that nonetheless store different pointers.
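For instance, a small self-contained sketch of both points; Pair and owner_equal are illustrative names, with owner_equal generalized from owner_equals above so that differently typed pointers can be compared:
#include <cassert>
#include <memory>

struct Pair { int first = 0; int second = 0; };

// Same idea as owner_equals above, generalized so differently typed
// shared_ptr/weak_ptr instances can be compared by owner.
template<class A, class B>
bool owner_equal(const A& lhs, const B& rhs) {
    return !lhs.owner_before(rhs) && !rhs.owner_before(lhs);
}

int main() {
    auto whole = std::make_shared<Pair>();
    std::weak_ptr<Pair> weak = whole;
    assert(owner_equal(whole, weak));   // same control block, no lock() needed

    // Aliasing constructor: stores a different pointer but shares whole's control block.
    std::shared_ptr<int> part(whole, &whole->second);
    assert(static_cast<void*>(part.get()) != static_cast<void*>(whole.get()));
    assert(owner_equal(part, whole));   // still "owner equal", which is the caveat above
}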
I've been reading quite a number of discussions about performance issues when smart pointers are involved in an application. One of the frequent recommendations is to pass a smart pointer as const& instead of a copy, like this:
void doSomething(std::shared_ptr<T> o) {}
versus
void doSomething(const std::shared_ptr<T> &o) {}
However, doesn't the second variant actually defeat the purpose of a shared pointer? We are sharing the shared pointer itself here, so if for some reason the pointer is released in the calling code (think of reentrancy or side effects), that const reference becomes dangling, which is exactly the situation a shared pointer should prevent. I understand that const& saves some time as there is no copying involved and no locking to manage the ref count. But the price is making the code less safe, right?
The advantage of passing the shared_ptr by const& is that the reference count doesn't have to be increased and then decreased. Because these operations have to be thread-safe, they can be expensive.
You are quite right that there is a risk that you can have a chain of passes by reference that later invalidates the head of the chain. This happened to me once in a real-world project with real-world consequences. One function found a shared_ptr in a container and passed a reference to it down a call stack. A function deep in the call stack removed the object from the container, causing all the references to suddenly refer to an object that no longer existed.
So when you pass something by reference, the caller must ensure it survives for the life of the function call. Don't use a pass by reference if this is an issue.
(I'm assuming you have a use case where there's some specific reason to pass by shared_ptr rather than by reference. The most common such reason would be that the function called may need to extend the life of the object.)
Update: Some more details on the bug for those interested: This program had objects that were shared and implemented internal thread safety. They were held in containers and it was common for functions to extend their lifetimes.
This particular type of object could live in two containers. One when it was active and one when it was inactive. Some operations worked on active objects, some on inactive objects. The error case occurred when a command was received on an inactive object that made it active while the only shared_ptr to the object was held by the container of inactive objects.
The inactive object was located in its container. A reference to the shared_ptr in the container was passed, by reference, to the command handler. Through a chain of references, this shared_ptr ultimately got to the code that realized this was an inactive object that had to be made active. The object was removed from the inactive container (which destroyed the inactive container's shared_ptr) and added to the active container (which added another reference to the shared_ptr passed to the "add" routine).
At this point, it was possible that the only shared_ptr to the object that existed was the one in the inactive container. Every other function in the call stack just had a reference to it. When the object was removed from the inactive container, the object could be destroyed, and all those references referred to a shared_ptr that no longer existed.
It took about a month to untangle this.
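A stripped-down sketch of that failure pattern; the containers and names below are illustrative, not the original code:
#include <memory>
#include <vector>

struct Object {};

std::vector<std::shared_ptr<Object>> inactive;
std::vector<std::shared_ptr<Object>> active;

void activate(const std::shared_ptr<Object>& obj) {
    // Clearing 'inactive' destroys the very shared_ptr that 'obj' refers to;
    // if it was the last one, the Object dies with it.
    inactive.clear();
    active.push_back(obj);   // undefined behavior: 'obj' dangles here
}

void handle_command() {
    // A reference to the shared_ptr stored *inside* the container is
    // passed down the call chain.
    activate(inactive.front());
}

int main() {
    inactive.push_back(std::make_shared<Object>());
    handle_command();
}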
First of all, don't pass a shared_ptr down a call chain unless there is a possibility that one of the called functions will store a copy of it. Pass a reference to the referred object, or a raw pointer to that object, or possibly a box, depending on whether it can be optional or not.
But when you do pass a shared_ptr, then preferably pass it by reference to const, because copying a shared_ptr has additional overhead. The copying must update the shared reference count, and this update must be thread safe. Hence there is a little inefficiency that can be (safely) avoided.
Regarding
“the price is making the code less safe, right?”
No. The price is an extra indirection in naïvely generated machine code, but the compiler manages that. So it's all about just avoiding a minor but totally needless overhead that the compiler can't optimize away for you, unless it's super-smart.
As David Schwartz exemplified in his answer, when you pass by reference to const, the aliasing problem is possible: the function you call may in turn change, or call a function that changes, the original object. And by Murphy's law it will happen at the most inconvenient time, at maximum cost, and with the most convoluted, impenetrable code. But this is so regardless of whether the argument is a string or a shared_ptr or whatever. Happily it's a very rare problem. But do keep it in mind, also for passing shared_ptr instances.
First of all there is a semantic difference between the two:
Passing a shared pointer by value indicates that your function is going to take its share of ownership of the underlying object.
Passing shared_ptr as a const reference does not indicate any intent beyond just passing the underlying object by const reference (or raw pointer), apart from forcing users of this function to use shared_ptr. So mostly rubbish.
Comparing performance implications of those is irrelevant as long as they are semantically different.
from https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/
Don’t pass a smart pointer as a function parameter unless you want to use or manipulate the smart pointer itself, such as to share or transfer ownership.
and this time I totally agree with Herb :)
And another quote from the same, which answers the question more directly
Guideline: Use a non-const shared_ptr& parameter only to modify the shared_ptr. Use a const shared_ptr& as a parameter only if you’re not sure whether or not you’ll take a copy and share ownership; otherwise use * instead (or if not nullable, a &)
As pointed out in C++ - shared_ptr: horrible speed, copying a shared_ptr takes time. The construction involves an atomic increment and the destruction an atomic decrement; an atomic update (whether increment or decrement) may prevent a number of compiler optimizations (memory loads/stores cannot migrate across the operation), and at the hardware level it involves the CPU cache-coherency protocol to ensure that the whole cache line is owned (in exclusive mode) by the core doing the modification.
So, you are right, std::shared_ptr<T> const& may be used as a performance improvement over just std::shared_ptr<T>.
You are also right that there is a theoretical risk for the pointer/reference to become dangling because of some aliasing.
That being said, the risk is latent in any C++ program already: any single use of a pointer or reference is a risk. I would argue that the few occurrences of std::shared_ptr<T> const& should be a drop in the water compared to the total number of uses of T&, T const&, T*, ...
Lastly, I would like to point out that passing a shared_ptr<T> const& is weird. The following cases are common:
shared_ptr<T>: I need a copy of the shared_ptr
T*/T const*/T&/T const&: I need a (possibly null, in the pointer cases) handle to T
The next case is much less common:
shared_ptr<T>&: I may reseat the shared_ptr
But passing shared_ptr<T> const&? Legitimate uses are very very rare.
Passing shared_ptr<T> const& where all you want is a reference to T is an anti-pattern: you force the user to use shared_ptr when they could be allocating T another way! Most of the time (99.99…%), you should not care how T is allocated.
The only case where you would pass a shared_ptr<T> const& is if you are not sure whether you will need a copy or not, and because you have profiled the program and showed that this atomic increment/decrement was a bottleneck you have decided to defer the creation of the copy to only the cases where it is needed.
This is such an edge case that any use of shared_ptr<T> const& should be viewed with the highest degree of suspicion.
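To make those conventions concrete, here is a hedged sketch of the candidate signatures; Widget and the function names are purely illustrative:
#include <memory>

struct Widget {};

void keep(std::shared_ptr<Widget> w);               // will share or extend ownership
void use(Widget& w);                                // just needs the object, never null
void maybe_use(Widget* w);                          // just needs the object, may be null
void reseat(std::shared_ptr<Widget>& w);            // may point w at a different object
void might_keep(const std::shared_ptr<Widget>& w);  // rare: copies only if it decides to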
If no modification of ownership is involved in your method, there's no benefit for your method to take a shared_ptr by copy or by const reference; it pollutes the API and potentially incurs overhead (if passing by copy).
The clean way is to pass the underlying type by const ref or ref, depending on your use case:
void doSomething(const T& o) {}
auto s = std::make_shared<T>(...);
// ...
doSomething(*s);
The underlying object can't be released during the method call, because the caller's shared_ptr keeps it alive for the duration of the call.
I think it's perfectly reasonable to pass by const & if the target function is synchronous and only makes use of the parameter during execution, with no further need of it upon return. Here it is reasonable to save on the cost of increasing the reference count, as you don't really need the extra safety in these limited circumstances, provided you understand the implications and are sure the code is safe.
This is as opposed to when the function needs to save the parameter (for example in a class member) for later re-reference.
std::unique_ptr is a smart pointer that retains sole ownership of an object through a pointer and destroys that object when the unique_ptr goes out of scope. No two unique_ptr instances can manage the same object.
How the last statement is ensured?
I don't believe that there is "someone" in the STL who checks whether one of the already existing std::unique_ptrs already owns the raw pointer. This would be very inefficient with a huge number of unique pointers, even if it were a linear-complexity algorithm. There should be a nice trick, right?
It isn't ensured. The name is a statement of intended usage, not any guarantee fully enforced by a runtime system. That is, you can write this code:
std::unique_ptr<int> i1(new int());
std::unique_ptr<int> i2(i1.get());
and you have two unique_ptrs referring to the same object, but the program has undefined behavior because it will delete the pointer twice.
unique_ptr is not copyable to make it harder to create two such pointers by accident. C++ protects against Murphy, not Machiavelli.
The reason it's called unique is because you can't copy a unique pointer. You can steal its value, but that leaves the original unique_ptr empty.
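For illustration, a minimal sketch of "stealing" the value with std::move (variable names are arbitrary):
#include <cassert>
#include <memory>
#include <utility>

int main() {
    std::unique_ptr<int> a = std::make_unique<int>(42);
    // std::unique_ptr<int> b = a;         // does not compile: no copy constructor
    std::unique_ptr<int> b = std::move(a); // ownership is transferred ("stolen")
    assert(a == nullptr);                  // the original is now empty
    assert(*b == 42);
}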
Please take into account my inexperience, but I do not understand the point of std::owner_less.
I have been shown that a map with weak_ptr as key is not recommended because an expired weak_ptr key will break the map, actually:
If it expires, then the container's order is broken, and trying to use the container afterwards will give undefined behaviour.
How undefined is that behavior? The reason I ask is because the docs say about owner_less:
This function object provides owner-based (as opposed to value-based) mixed-type ordering of both std::weak_ptr and std::shared_ptr. The order is such that two smart pointers compare equivalent only if they are both empty or if they both manage the same object, even if the values of the raw pointers obtained by get() are different (e.g. because they point at different subobjects within the same object)
Again, this is my inexperience talking, but it doesn't sound like the map will be completely broken by an expired weak_ptr:
Returns whether the weak_ptr object is either empty or there are no more shared_ptr in the owner group it belongs to.
Expired pointers act as empty weak_ptr objects when locked, and thus can no longer be used to restore an owning shared_ptr.
It sounds like the behaviour could become loose rather than completely undefined. If one's implementation removes expired weak_ptrs and simply doesn't use, or has no use for, any lingering ones, when does the behavior become undefined?
If one's implementation has no regard for order, yet only needs a convenient way to associate weak_ptrs with data, is the behavior still undefined? In other words, will find start to return the wrong key?
Map
The only problem that I can find in the docs is what's referenced above, that expired weak_ptrs will return equivalent.
According to these docs, this isn't a problem for implementations that do not rely on ordering nor have use for expired weak_ptrs:
Associative
Elements in associative containers are referenced by their key and not by their absolute position in the container.
Ordered
The elements in the container follow a strict order at all times. All inserted elements are given a position in this order.
Map
Each element associates a key to a mapped value: Keys are meant to identify the elements whose main content is the mapped value.
That sounds like there is no problem if an implementation is not concerned with order and has no use for expired weak_ptrs: values are referenced by key, not by order, so finding an expired weak_ptr may return another weak_ptr's value, but since this particular implementation has no use for it except to erase it, there's no problem.
I can see how a need to use weak_ptr ordering or expired weak_ptrs could be a problem, whatever application that may be, but all behavior seems far from undefined, so a map or set does not seem to be totally broken by an expired weak_ptr.
Are there more technical explanations of map, weak_ptr, and owner_less that refute these docs and my interpretation?
One point of clarification: expired weak_ptrs do not cause undefined behaviour when you use owner_less. From the standard:
under the equivalence relation defined by operator(), !operator()(a, b) && !operator()(b, a), two shared_ptr or weak_ptr instances are equivalent if and only if they share ownership or are both empty.
One thing to remember is that an empty weak_ptr is one that has never been assigned a valid shared_ptr, or one which has been assigned an empty shared_ptr/weak_ptr. A weak_ptr that has expired is not an empty weak_ptr.
Edit:
The definition above hinges on what it means to have an "empty" weak_ptr. So, let's look at the standard:
constexpr weak_ptr() noexcept;
Effects: Constructs an empty weak_ptr object.
Postconditions: use_count() == 0.
weak_ptr(const weak_ptr& r) noexcept;
template<class Y> weak_ptr(const weak_ptr<Y>& r) noexcept;
template<class Y> weak_ptr(const shared_ptr<Y>& r) noexcept;
Requires: The second and third constructors shall not participate in overload resolution unless Y* is implicitly convertible to T*.
Effects: If r is empty, constructs an empty weak_ptr object; otherwise, constructs a weak_ptr object that shares ownership with r and stores a copy of the pointer stored in r.
Postconditions: use_count() == r.use_count().
Swapping simply exchanges contents, and assignment is defined as the above constructors plus a swap.
To create an empty weak_ptr, you use the default constructor, or pass it a weak_ptr or shared_ptr that is empty. Now, you'll note expiration doesn't actually cause a weak_ptr to become empty. It simply causes it to have a use_count() of zero and expired() to return true. This is because the underlying reference count cannot be released until all the weak pointers that shared the object are also released.
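A small self-contained sketch of the expired-versus-empty distinction (the asserts are just for illustration):
#include <cassert>
#include <memory>

int main() {
    std::weak_ptr<int> empty;       // never assigned: an *empty* weak_ptr
    std::weak_ptr<int> expired_wp;
    {
        auto sp = std::make_shared<int>(1);
        expired_wp = sp;            // shares ownership with sp
    }                               // sp destroyed here, so expired_wp expires

    assert(expired_wp.expired() && expired_wp.use_count() == 0);
    // But it is not empty: it still holds a control block, so owner-based
    // comparison still distinguishes it from the truly empty weak_ptr.
    bool owner_equal = !expired_wp.owner_before(empty) && !empty.owner_before(expired_wp);
    assert(!owner_equal);
}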
Here is a minimal example which demonstrates the same problem:
#include <cctype>
#include <set>

struct Character
{
    char ch;
};

bool globalCaseSensitive = true;

bool operator< (const Character& l, const Character& r)
{
    if (globalCaseSensitive)
        return l.ch < r.ch;
    else
        return std::tolower(l.ch) < std::tolower(r.ch);
}

int main()
{
    std::set<Character> set = { {'a'}, {'B'} };
    globalCaseSensitive = false; // change set ordering => undefined behaviour
}
map and set require that their key comparator implement a strict weak ordering relation over their key type. Among other things, this means that if x compares less than y at some point, then x must always compare less than y. If the program does not guarantee that, then the program exhibits undefined behaviour.
We can fix this example by providing a custom comparator which ignores the case sensitivity switch:
struct Compare
{
    bool operator() (const Character& l, const Character& r) const
    {
        return l.ch < r.ch;
    }
};

int main()
{
    std::set<Character, Compare> set = { {'a'}, {'B'} };
    globalCaseSensitive = false; // set ordering is unaffected => safe
}
If a weak_ptr expires, then that weak_ptr will subsequently compare differently to others due to it being null, and can no longer guarantee a strict weak ordering relation. In this case, the fix is the same: use a custom comparator which is immune to changes in shared state; owner_less is one such comparator.
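A sketch of that fix, assuming the keys are std::weak_ptr<int>; the element type is arbitrary:
#include <memory>
#include <set>

int main() {
    std::set<std::weak_ptr<int>, std::owner_less<std::weak_ptr<int>>> keys;

    auto sp = std::make_shared<int>(7);
    keys.insert(std::weak_ptr<int>(sp));

    sp.reset();   // the stored key expires...
    // ...but its position in the set stays valid, because owner_less orders
    // by control block, and the control block outlives the object.
}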
How undefined is that behavior?
Undefined is undefined. There is no continuum.
If one's implementation [...] when does the behavior become undefined?
As soon as the contained elements cease to have a well-defined strict weak ordering relation.
If one's implementation [...] is the behavior still undefined? In other words, will find start to return the wrong key?
Undefined behaviour is not restricted to just returning the wrong key. It could do anything.
That sounds like [...] there is no problem because values are referenced by key not by order.
Without ordering, keys lack the intrinsic ability to reference values.
std::sort requires an ordering as well, and owner_less could be useful there.
In a map or set, less so: putting a weak_ptr as the key to either is courting undefined behaviour. Since you will have to synchronize the lifetime of the container and the pointer manually anyway, you may as well use a raw pointer (or a hand-rolled non-owning smart pointer that somehow handles the expiration problem) to make that clearer.
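For example, a minimal sketch of using owner_less with std::sort (the vector contents are arbitrary):
#include <algorithm>
#include <memory>
#include <vector>

int main() {
    auto a = std::make_shared<int>(1);
    auto b = std::make_shared<int>(2);
    std::vector<std::weak_ptr<int>> v{ b, a, b };

    // Orders by control block identity; the order stays consistent even if
    // some of the weak_ptrs expire later.
    std::sort(v.begin(), v.end(), std::owner_less<std::weak_ptr<int>>());
}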
I was wondering if I need to check whether sp is null before I use it.
Correct me if I am wrong, but creating an alias will not increase the ref counter, and therefore inside the method we are working with a shared pointer for which we don't know whether the embedded pointer has been reset beforehand. Am I correct in assuming this?
void Class::MyFunction(std::shared_ptr<foo> &sp)
{
    ...
    sp->do_something();
    ...
}
You have to consider that std::shared_ptr is overall still a pointer (encapsulated in a pointer-like class) and that it can indeed be constructed to internally hold nullptr. When that happens, expressions like:
ptr->
*ptr
lead to undefined behavior. So, yeah, if you expect that the pointer could also be nullptr, then you should check its value with:
ptr != nullptr
or
!ptr
(thanks to its operator bool).
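Putting that together, a minimal sketch of the check inside a function shaped like the one in the question; foo here is a stand-in type:
#include <memory>

struct foo { void do_something() {} };   // stand-in for the questioner's type

void MyFunction(std::shared_ptr<foo>& sp)
{
    if (!sp) {          // operator bool: false if sp holds no object
        return;         // or throw/assert, depending on what "null" means here
    }
    sp->do_something();
}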
Most shared pointers are exactly like normal pointers in this respect. You have to check for null. Depending on the function, you may want to switch to using
void myFunction( Foo const& foo );
and call it by dereferencing the pointer (which pushes the responsibility for ensuring that the pointer is not null onto the caller).
Also, it's probably bad practice to make the function take a shared pointer unless there are some special ownership semantics involved. If the function is just going to use the pointer for the duration of the function, neither changing it nor taking ownership, a raw pointer is probably more appropriate, since it imposes fewer constraints on the caller. (But this really depends a lot on what the function does, and why you are using shared pointers. And of course, the fact that you've passed a non-const reference to the shared pointer suggests that you are going to modify it, so passing a shared pointer might be appropriate.)
Finally, different implementations of shared pointers make it more or less difficult to check for null. With C++11, you can use std::shared_ptr and just compare it to nullptr naturally, as you'd expect. The Boost implementation is a bit broken in this respect, however; you cannot just compare it to 0 or NULL. You must either construct an empty boost::shared_ptr for the comparison, or call get on it and compare the resulting raw pointer to 0 or NULL.
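A sketch of the two Boost-era workarounds just described (Foo is a placeholder type):
#include <boost/shared_ptr.hpp>

struct Foo {};

bool is_null_via_compare(const boost::shared_ptr<Foo>& p)
{
    // compare against a default-constructed (empty) shared_ptr
    return p == boost::shared_ptr<Foo>();
}

bool is_null_via_get(const boost::shared_ptr<Foo>& p)
{
    // or drop down to the raw pointer and compare that against 0/NULL
    return p.get() == 0;
}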
There is no point in passing a shared_ptr as reference.
You can obtain the internal raw pointer via boost::shared_ptr<T>::get() and check it for nullptr.
Also relevant: move to std :)
Edit: This is the implementation: http://www.boost.org/doc/libs/1_55_0/boost/smart_ptr/shared_ptr.hpp
And here is a SO thread about ref or no ref: Should I pass a shared_ptr by reference?
It uses move semantics when C++11 is available and copies two ints otherwise, which is slower than passing a reference; but who is working at this level of optimization?
There's no general answer to this question. You have to treat it just like any other pointer. If you don't know whether it's null, test. If you believe it to never be null, assert() that it's not null and use it directly.
The fact that you have a reference to shared_ptr, or even that you have a shared_ptr, has no impact here.
I have a program that uses boost::shared_ptrs and, in particular, relies on the accuracy of the use_count to perform optimizations.
For instance, imagine an addition operation with two argument pointers called lhs and rhs. Say they both have the type shared_ptr<Node>. When it comes time to perform the addition, I'll check the use_count, and if I find that one of the arguments has a reference count of exactly one, then I'll reuse it to perform the operation in place. If neither argument can be reused, I must allocate a new data buffer and perform the operation out-of-place. I'm dealing with enormous data structures, so the in-place optimization is very beneficial.
Because of this, I can never copy the shared_ptrs without reason, i.e., every function takes the shared_ptrs by reference or const reference to avoid distorting use_count.
My question is this: I sometimes have a shared_ptr<T> & that I want to cast to shared_ptr<T const> &, but how can I do it without distorting the use count? static_pointer_cast returns a new object rather than a reference. I'd be inclined to think that it would work to just cast the whole shared_ptr, as in:
void f(shared_ptr<T> & x)
{
    shared_ptr<T const> & x_ = *reinterpret_cast<shared_ptr<T const> *>(&x);
}
I highly doubt this complies with the standard, but, as I said, it will probably work. Is there a way to do this that's guaranteed safe and correct?
Updating to Focus the Question
Critiquing the design does not help answer this post. There are two interesting questions to consider:
Is there any guarantee (by the writer of boost::shared_ptr, or by the standard, in the case of std::tr1::shared_ptr) that shared_ptr<T> and shared_ptr<T const> have identical layouts and behavior?
If (1) is true, then is the above a legal use of reinterpret_cast? I think you would be hard-pressed to find a compiler that generates failing code for the above example, but that doesn't mean it's legal. Whatever your answer, can you find support for it in the C++ standard?
I sometimes have a shared_ptr<T> & that I want to cast to shared_ptr<T const> &, but how can I do it without distorting the use count?
You don't. The very concept is wrong. Consider what happens with a naked pointer T* and const T*. When you cast your T* into a const T*, you now have two pointers. You don't have two references to the same pointer; you have two pointers.
Why should this be different for smart pointers? You have two pointers: one to a T, and one to a const T. They're both sharing ownership of the same object, so you are using two of them. Your use_count therefore ought to be 2, not 1.
Your problem is your attempt to overload the meaning of use_count, co-opting its functionality for some other purpose. In short: you're doing it wrong.
Your description of what you do with shared_ptrs whose use_count is one is... frightening. You're basically saying that certain functions co-opt one of their arguments, which the caller is clearly still using (since the caller obviously still holds it). And the caller doesn't know which one was claimed (if any), so the caller has no idea what the state of the arguments is after the function. Modifying the arguments for operations like that is usually not a good idea.
Plus, what you're doing can only work if you pass shared_ptr<T> by reference, which itself isn't a good idea (like regular pointers, smart pointers should almost always be taken by value).
In short, you're taking a very commonly used object with well-defined idioms and semantics, then requiring that it be used in a way that they are almost never used, with specialized semantics that work counter to the way everyone actually uses them. That's not a good thing.
You have effectively created the concept of co-optable pointer, a shared pointer that can be in 3 use states: empty, in use by the person who gave it to you only and thus you can steal from it, and in use by more than one person so you can't have it. It's not the semantics that shared_ptr exists to support. So you should write your own smart pointer that provides these semantics in a much more natural way.
Something that recognizes the difference between how many instances of a pointer you have around and how many actual users of it you have. That way, you can pass it around by value properly, but you have some way of saying that you are currently using it and don't want one of these other functions to claim it. It could use shared_ptr internally, but it should provide its own semantics.
static_pointer_cast is the right tool for the job — you've already identified that.
The problem with it isn't that it returns a new object, but rather that it leaves the old object unchanged. You want to get rid of the non-const pointer and move on with the const pointer. What you really want is static_pointer_cast< T const >( std::move( old_ptr ) ). But there isn't an overload for rvalue references.
The workaround is simple: manually invalidate the old pointer just as std::move would.
auto my_const_pointer = static_pointer_cast< T const >( modifiable_pointer );
modifiable_pointer = nullptr;
It might be slightly slower than reinterpret_cast, but it's a lot more likely to work. Don't underestimate how complex the library implementation is, and how it can fail when abused.
An aside: use pointer.unique() instead of use_count() == 1. Some implementations might use a linked list with no cached use count, making use_count() O(N) whereas the unique test remains O(1). The Standard recommends unique for copy on write optimization.
EDIT: Now I see you mention
I can never copy the shared_ptrs without reason, i.e., every function takes the shared_ptrs by reference or const reference to avoid distorting use_count.
This is Doing It Wrong. You've added another layer of ownership semantics atop what shared_ptr already does. They should be passed by value, with std::move used where the caller no longer desires ownership. If the profiler says you're spending time adjusting reference counts, then you might add some references-to-pointer in the inner loops. But as a general rule, if you can't set a pointer to nullptr because you're no longer using it, but someone else might be, then you've really lost track of ownership.
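A small sketch of the pass-by-value-plus-move convention described here (consume and Node are illustrative names):
#include <memory>
#include <utility>

struct Node {};

// Takes its own reference by value; the caller decides whether to copy or move.
void consume(std::shared_ptr<Node> node)
{
    // ... use or store 'node' ...
}

int main() {
    auto n = std::make_shared<Node>();
    consume(n);             // caller keeps ownership: the count briefly goes up
    consume(std::move(n));  // caller hands over ownership: no extra count traffic
    // n is now empty, which makes the transfer of ownership explicit.
}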
If you cast a shared_ptr to a different type, without changing the reference count, this implies that you'll now have two pointers to the same data. Hence, unless you erase the old pointer, you can't do this with shared_ptrs without "distorting the reference count".
I would suggest that you use raw pointers here instead, rather than going out of your way to not use the features of shared_ptrs. If you need to sometimes create new references, use enable_shared_from_this to derive a new shared_ptr to an existing raw pointer.
When it comes time to perform the addition, I'll check the use_count, and if I find that one of the arguments has a reference count of exactly one, then I'll reuse it to perform the operation in place.
This isn't necessarily valid unless you're applying some other rules across the whole program to make it so. Consider:
shared_ptr<Node> add(shared_ptr<Node> const &lhs, shared_ptr<Node> const &rhs) {
    if (lhs.use_count() == 1) {
        // do whatever, reusing lhs
        return lhs;
    }
    if (rhs.use_count() == 1) {
        // do whatever, reusing rhs
        return rhs;
    }
    shared_ptr<Node> new_node = ... // do whatever without reusing lhs or rhs
    return new_node;
}

void foo() {
    shared_ptr<Node> a = make_shared<Node>(), b = make_shared<Node>();
    shared_ptr<Node> c = add(a, b);
    // error: we still have a and b and expect that they're unchanged, but they could have been modified!
}
Instead if you take the smart pointers by value:
shared_ptr<Node> add(shared_ptr<Node> lhs,shared_ptr<Node> rhs) {
Then if use_count()==1 you know that your copy is the only one, and it should be safe to reuse it.
However, there's a problem in using this as an optimization, because copying a shared_ptr requires synchronization. It could well be that doing all this synchronization all over the place costs far more than you save by reusing existing shared_ptrs. All this synchronization is the reason it's recommended that code that does not take ownership of a shared_ptr should take the shared_ptr by reference instead of by value.