I am trying to refactor some oldish code, and I want to use unique_ptrs for some objects where they are clearly suited. Up until now, shared_ptrs have generally been used.
Given that for most intents and purposes the two smart pointers behave identically, in many cases I don't see why I should have to distinguish between them. To take a trivial example:
EDIT: I've had to make the object a little less trivial...
class NamedItem
{
public:
    NamedItem(const string& name) : name(name) {}
    const string& GetName() const { return name; }
private:
    string name;
};

class SessionObject : public NamedItem
{
public:
    using NamedItem::NamedItem;
};

class TrivialObject : public NamedItem
{
public:
    using NamedItem::NamedItem;
};

class NameCacher
{
public:
    vector<??????<NamedItem>> named_items;

    void AddNamedItem(??????<NamedItem>& named_item)
    {
        named_items.push_back(named_item);
    }

    void PrintAllNamedItems()
    {
        // Print all names
    }
};
unique_ptr<SessionObject> session(new SessionObject("the session"));
shared_ptr<TrivialObject> some_object(new TrivialObject("whatever"));

NameCacher names;
names.AddNamedItem(session);     // The session pointer will not delete the session object, even if names stops referencing it.
names.AddNamedItem(some_object); // The some_object pointer is welcome to delete its object once names and everything else stop referencing it.
names.PrintAllNamedItems();
// If some_object goes out of scope, its shared_ptr will delete the object at this point (assuming nothing else still references it).
Given that 80% of the day-to-day behaviour of the two smart pointers is the same, isn't there a way to do this? The only thing I've found is converting a unique_ptr to a shared_ptr, which is categorically not what I want to do. Ideally, I'd like a common base class of the two smart pointers, but I can't find one.
Many thanks to all those who have responded to my question. I've been speaking with a knowledgeable colleague as well, and it's taken us about an hour to get a common understanding of the whole situation - so my apologies for not being able to convey this in my simplified example.
I thought I'd add this as an answer to explain to future readers why the premise of my question was ill-conceived. This attempts to summarise some of the comments on the original question.
I believe that I had misunderstood the utility of unique_ptrs and had been using them incorrectly. What I had originally wanted was a pointer that behaved as a shared_ptr does but did not need to manage a reference count, because I could guarantee that the object would stay alive for the entire session. As such, the code which referenced it could treat it the same as a normal shared pointer; it just didn't need to increment or decrement the reference count.
However, the purpose of a unique_ptr is that its ownership can be transferred, and in my example above I am attempting to hand it to another object while not transferring its ownership. As several commenters pointed out, this is done much better by giving the recipient a raw pointer (or a reference) obtained from the unique_ptr, as sketched below, which signals a very different intention from handing over a shared_ptr. As such, the two shouldn't share a common interface, as I had originally asked.
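To make that concrete, here is a rough sketch of the non-owning NameCacher described above (my own illustration, not code from the discussion): it takes plain references, stores raw non-owning pointers, and leaves each caller free to keep whichever smart pointer suits its ownership needs.

class NameCacher
{
public:
    void AddNamedItem(NamedItem& named_item)   // non-owning: the cacher merely observes
    {
        named_items.push_back(&named_item);
    }
    void PrintAllNamedItems()
    {
        for (NamedItem* item : named_items)
            cout << item->GetName() << "\n";
    }
private:
    vector<NamedItem*> named_items;            // raw pointers, no ownership
};

unique_ptr<SessionObject> session = make_unique<SessionObject>("the session");
shared_ptr<TrivialObject> some_object = make_shared<TrivialObject>("whatever");

NameCacher names;
names.AddNamedItem(*session);      // ownership stays with the unique_ptr
names.AddNamedItem(*some_object);  // ownership stays with the shared_ptr
names.PrintAllNamedItems();

The only obligation this leaves is that the NameCacher must not outlive the objects it observes.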
My thanks again for everyone who helped me understand my mistake.
In some code I have been working on, I am passing pointers into classes that don't necessarily manage those pointers themselves. When such a class is destroyed, it checks whether the pointer's memory has already been deallocated. The problem is that if the memory is deallocated and the original pointer is set to NULL before the class's destructor is called, the class's own copy of the pointer is left dangling: it still looks non-NULL, so the destructor tries to delete it, which causes a segmentation fault. The best solution I could think of is to store the pointer by reference, as shown below:
#include <iostream>
using namespace std;

class PtrReferenceClass {
public:
PtrReferenceClass(int*& i_) : i(i_) {}
void run() {
if(i == NULL)
cout << "pointer is null\n";
else
cout << "pointer isn't null\n";
}
int*& i;
};
int main() {
int* i = new int(5);
PtrReferenceClass test(i);
test.run();
delete i;
i = NULL;
test.run();
return 0;
}
As expected, the output is:
pointer isn't null
pointer is null
Of course, when the pointer isn't stored by reference, I end up with a dangling pointer.
My question is whether this is generally considered good programming practice. Are there any drawbacks to this solution, or is there a better convention?
It depends upon what you are trying to accomplish.
If you need the memory to stay around for your class, use C++11's std::shared_ptr everywhere instead of an int*.
If you don't need the memory to stay around for your class, use C++11's std::weak_ptr.
As for holding onto pointers in a class, that's not bad as long as they're wrapped in one of C++'s pointer wrappers. You can hang onto raw pointers, but in general you should only do that if speed is an extreme concern.
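As an illustration of the weak_ptr suggestion, here is a minimal sketch (the class name PtrObserverClass and the surrounding code are my own, not from the question): the class observes the int through a weak_ptr and checks whether it is still alive before using it.

#include <iostream>
#include <memory>

class PtrObserverClass {
public:
    explicit PtrObserverClass(std::weak_ptr<int> i_) : i(std::move(i_)) {}
    void run() {
        if (auto p = i.lock())   // lock() yields a non-empty shared_ptr only while the int is alive
            std::cout << "pointer isn't null: " << *p << "\n";
        else
            std::cout << "pointer is null\n";
    }
private:
    std::weak_ptr<int> i;
};

int main() {
    auto i = std::make_shared<int>(5);
    PtrObserverClass test(i);
    test.run();   // pointer isn't null: 5
    i.reset();    // last shared_ptr released, the int is destroyed
    test.run();   // pointer is null
    return 0;
}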
You could check for NULLness in the destructor of PtrReferenceClass. A much better alternative would be to use a shared_ptr, or to really clarify the ownership of i.
I agree with #Paranaix's comment on the main thread, as well as #ToniBig's: I can't really think of a situation where you would need this; such a thing mostly protects against horrible programmer error. You should also keep in mind that you are storing a reference to the pointer i, and that reference will be left dangling when i itself goes out of scope, regardless of whether the memory i refers to has been deallocated. So, in conclusion, please don't do this.
All you've done is trade one lifetime problem for another. The new problem may be easier to solve... or it may not.
Now you can detect that the object is gone... as long as something has kept the pointer variable alive.
Think carefully about your variable lifetimes, and it should become clear whether a reference-to-pointer (or, equivalently, a pointer-to-pointer) makes sense.
There certainly are cases where double indirection is valuable. I will leave you with a quote: "Any problem in computer science can be solved by adding another layer of indirection"
These are the options:
1. Give ownership to the class and manage the lifecycle inside it, with a safe setter method for changing the pointer when you want. Only do this if you have to create or obtain the pointer outside the class; otherwise just do everything inside it.
2. Only pass the pointer to the methods that will use it, and only when needed: void run(int* i).
I just came from Java and am new to C++. Over the course of a month I managed to teach myself C++. I've coded some basic stuff here and there, and I understand some of the concepts (polymorphism, virtual functions, etc.). Although I know how pointers work, I'm still having trouble knowing WHEN to use them.
I know that when you want to create something on the heap using new, you have to use a pointer, but I fail to recognize other situations in which pointers or references should be used. Is there some sort of rule of thumb for when to use pointers that I should know about? For example, when should function parameters have & or * in them? Sorry for the noob question.
The same answer as in another SO question (http://stackoverflow.com/questions/7058339/c-when-to-use-references-vs-pointers):
Use reference wherever you can, pointers wherever you must.
Avoid pointers until you can't.
The reason is that pointers make things harder to follow and read, are less safe, and allow far more dangerous manipulations than any other construct.
So the rule of thumb is to use pointers only if there is no other choice.
For example, returning a pointer to an object is a valid option when the function can return nullptr in some cases and it is assumed it will. That said, a better option would be to use something similar to boost::optional.
Another example is to use pointers to raw memory for specific memory manipulations. That should be hidden and localized in very narrow parts of the code, to help limit the dangerous part of the whole code base.
In your example, there is no point in using a pointer as the parameter because:
1. if you provide nullptr as the parameter, you're going into undefined-behaviour land;
2. the version taking a reference doesn't allow (without easy-to-spot tricks) the problem from 1;
3. the version taking a reference is simpler for the user to understand: you have to provide a valid object, not something that could be null.
If the function has to work both with and without a given object, then taking a pointer suggests that you can pass nullptr and the function will cope with it. That's a kind of contract between the user and the implementation code.
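A small sketch of that contract, using hypothetical names (Widget, Describe, DescribeIfAny) purely for illustration:

struct Widget { /* ... */ };

void Describe(const Widget& w)          // contract: the caller must supply a valid object
{
    // ... always has a Widget to work with ...
}

void DescribeIfAny(const Widget* w)     // contract: nullptr is an accepted "no object" value
{
    if (w == nullptr)
        return;                         // the "without an object" case is handled explicitly
    // ... use *w ...
}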
Use references as function arguments (simple, efficient, (mostly) safe).
Use pointers as non-owning members of objects (reassignment tends to make more sense with pointers than with references).
Use smart pointers for owning heap-allocated objects (and avoid heap allocation for most objects).
Pointers can also be used where nullptr is a desirable possible value for the pointer and in situations where you want to use pointer arithmetic.
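A small illustration of those guidelines combined (the Logger, Buffer and Worker names are my own hypothetical examples, not from the answer):

#include <memory>
#include <string>

struct Logger { void log(const std::string&) {} };
struct Buffer { /* some large, heap-allocated resource */ };

class Worker
{
public:
    explicit Worker(Logger& log) : logger(&log) {}   // argument passed by reference: cannot be null

    void setLogger(Logger& log) { logger = &log; }   // non-owning member pointer: easy to reseat later

private:
    Logger* logger;                                  // observed, never deleted here
    std::unique_ptr<Buffer> buffer =                 // owned heap object managed by a smart pointer
        std::make_unique<Buffer>();
};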
I recently came across some C++ code that looked like this:
class SomeObject
{
private:
// NOT a pointer
BigObject foobar;
public:
BigObject * getFoobar() const
{
return &foobar;
}
};
I asked the programmer why he didn't just make foobar a pointer, and he said that this way he didn't have to worry about allocating and deallocating memory. I asked if he had considered using a smart pointer; he said this worked just as well.
Is this bad practice? It seems very hackish.
That's perfectly reasonable, and not "hackish" in any way; although it might be considered better to return a reference to indicate that the object definitely exists. A pointer might be null, and might lead some to think that they should delete it after use.
The object has to exist somewhere, and existing as a member of an object is usually as good as existing anywhere else. Adding an extra level of indirection by dynamically allocating it separately from the object that owns it makes the code less efficient, and adds the burden of making sure it's correctly deallocated.
Of course, the member function can't be const if it returns a non-const reference or pointer to a member. That's another advantage of making it a member: a const qualifier on SomeObject applies to its members too, but doesn't apply to any objects it merely has a pointer to.
The only danger is that the object might be destroyed while someone still has a pointer or reference to it; but that danger is still present however you manage it. Smart pointers can help here, if the object lifetimes are too complex to manage otherwise.
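For instance, the reference-returning variant suggested above might look like this (a sketch only, assuming BigObject is defined as in the question; both a const and a non-const overload are provided):

class SomeObject
{
public:
    const BigObject& getFoobar() const { return foobar; }   // read-only access for const callers
    BigObject& getFoobar() { return foobar; }                // mutable access for non-const callers
private:
    BigObject foobar;
};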
You are returning a pointer to a member variable not a reference. This is bad design.
Your class manages the lifetime of the foobar object, and by returning a pointer to one of its members you enable consumers of your class to keep using that pointer beyond the lifetime of the SomeObject instance. It also lets users change the state of the SomeObject instance as they wish.
Instead, you should refactor your class so that the operations that would be performed on foobar become methods of SomeObject.
P.S. Consider naming things properly: when you define it, it is a class; when you instantiate it, you have an object of that class.
It's generally considered less than ideal to return pointers to internal data at all; it prevents the class from managing access to its own data. But if you want to do that anyway I see no great problem here; it simplifies the management of memory.
Is this bad practice? It seems very hackish.
It is. If the object goes out of scope before the pointer does, the member variable no longer exists, yet a pointer to it still does. Any attempt to dereference that pointer after the object's destruction results in undefined behaviour: this could be a crash, or it could be hard-to-find bugs where arbitrary memory is read and treated as a BigObject.
if he considered using some smart pointer
Using smart pointers, specifically std::shared_ptr<T> or the boost version, would technically work here and avoid the potential crash (if you allocate via the shared pointer constructor) - however, it also confuses who owns that pointer - the class, or the caller? Furthermore, I'm not sure you can just add a pointer to an object to a smart pointer.
Both of these points deal with the technical issue of getting a pointer out of a class, but the real question should be "why?", as in "why are you returning a pointer from a class?" There are cases where this is the only way, but more often than not you don't need to return a pointer at all. For example, suppose the variable needs to be passed to a C API which takes a pointer to that type. In that case, you would probably be better off encapsulating the C call in the class.
As long as the caller knows that the pointer returned from getFoobar() becomes invalid when the SomeObject object destructs, it's fine. Such provisos and caveats are common in older C++ programs and frameworks.
Even current libraries have to do this for historical reasons. e.g. std::string::c_str, which returns a pointer to an internal buffer in the string, which becomes unusable when the string destructs.
Of course, that is difficult to ensure in a large or complex program. In modern C++ the preferred approach is to give everything simple "value semantics" as far as possible, so that every object's life time is controlled by the code that uses it in a trivial way. So there are no naked pointers, no explicit new or delete calls scattered around your code, etc., and so no need to require programmers to manually ensure they are following the rules.
(And then you can resort to smart pointers in cases where you are totally unable to avoid shared responsibility for object lifetimes.)
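To illustrate the c_str() hazard mentioned above, here is a tiny sketch (my own, not from any answer) of how such a pointer can outlive the object it came from:

#include <string>

void demo()
{
    const char* dangling = nullptr;
    {
        std::string s = "temporary";
        dangling = s.c_str();    // valid only while s is alive
    }                            // s is destroyed here; dangling now points at freed storage
    // any use of dangling past this point is undefined behaviour
}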
Two unrelated issues here:
1) How would you like your instance of SomeObject to manage the instance of BigObject that it needs? If each instance of SomeObject needs its own BigObject, then a BigObject data member is totally reasonable. There are situations where you'd want to do something different, but unless that situation arises stick with the simple solution.
2) Do you want to give users of SomeObject direct access to its BigObject? By default the answer here would be "no", on the basis of good encapsulation. But if you do want to, then that doesn't change the assessment of (1). Also if you do want to, you don't necessarily need to do so via a pointer -- it could be via a reference or even a public data member.
A third possible issue might arise that does change the assessment of (1):
3) Do you want to give users of SomeObject direct access to an instance of BigObject that they continue using beyond the lifetime of the instance of SomeObject that they got it from? If so then of course a data member is no good. The proper solution might be shared_ptr, or for SomeObject::getFooBar to be a factory that returns a different BigObject each time it's called.
In summary:
Other than the fact it doesn't compile (getFooBar() needs to return const BigObject*), there is no reason so far to suppose that this code is wrong. Other issues could arise that make it wrong.
It might be better style to return const & rather than const *. Which you return has no bearing on whether foobar should be a BigObject data member.
There is certainly no "just" about making foobar a pointer or a smart pointer -- either one would necessitate extra code to create an instance of BigObject to point to.
I came across several questions where the answers state that using T* is never the best idea.
While I already make much use of RAII, there is one particular point in my code where I use a T*. Reading about the various smart pointers, I couldn't find one where I'd say I gain a clear advantage from using it.
My scenario:
class MyClass
{
...
// This map is huge and only used by MyClass
// and by several objects that are only used by MyClass as well.
HashMap<string, Id> _hugeIdMap;
...
void doSomething()
{
MyMapper mapper;
// Here is what I pass. The reason I can't pass a const-ref is
// that the mapper may possibly assign new IDs for keys not yet in the map.
mapper.setIdMap(&_hugeIdMap);
mapper.map(...);
}
};
MyMapper now has a HashMap<...>* member, which, according to highly voted answers to unrelated questions, is never a good idea. (Although the mapper will go out of scope before the instance of MyClass does, so I don't consider it too much of a problem: there is no new in the mapper and no delete will be needed.)
So what is the best alternative in this particular use-case?
Personally I think a raw pointer (or reference) is okay here. Smart pointers are concerned with managing the lifetime of the object pointed to, and in this case MyMapper isn't managing the lifetime of that object, MyClass is. You also shouldn't have a smart pointer pointing to an object that was not dynamically allocated (which the hash map isn't in this case).
Personally, I'd use something like the following:
class MyMapper
{
public:
MyMapper(HashMap<string, Id> &map)
: _map(map)
{
}
private:
HashMap<string, Id> &_map;
};
Note that this will prevent MyMapper from having an assignment operator, and it can only work if it's acceptable to pass the HashMap in the constructor; if that is a problem, I'd make the member a pointer (though I'd still pass the argument as a reference, and do _map(&map) in the initializer list).
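For completeness, a rough sketch of that pointer-member variant (my own illustration, reusing the HashMap and Id types from the question):

class MyMapper
{
public:
    explicit MyMapper(HashMap<string, Id>& map)
        : _map(&map)                    // still accept a reference, store its address
    {
    }

    void setIdMap(HashMap<string, Id>& map) { _map = &map; }   // reseating is now possible

private:
    HashMap<string, Id>* _map;          // non-owning: MyClass keeps ownership of the map
};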
If it's possible for MyMapper or any other class using the hash map to outlive MyClass, then you'd have to start thinking about smart pointers. In that case, I would probably recommend std::shared_ptr, but you'd have to use it everywhere: _hugeIdMap would have to be a shared_ptr to a dynamically allocated value, not a regular non-pointer field.
Update:
Since you said that using a reference is not acceptable due to the project's coding standards, I would suggest just sticking with a raw pointer for the reasons mentioned above.
Naked pointers (normally referred to as raw pointers) are just fine when the holder has no responsibility for deleting the object. In the case of MyMapper, the pointer points to an object already owned by MyClass, so it is absolutely fine not to delete it. The problems arise when you use raw pointers that you do intend objects to be deleted through. People only ask questions when they have problems, which is why you almost always see raw pointers discussed in a problematic context, but a raw pointer in a non-owning context is fine.
How about passing it into the constructor and keeping a reference (or const-reference) to it? That way your intent of not owning the object is made clear.
Passing auto-pointers or shared-pointers are mostly for communicating ownership.
shared pointers indicate that ownership is shared;
auto-pointers indicate that it is the receiver's responsibility;
references indicate that it is the sender's responsibility;
plain (raw) pointers indicate nothing.
About your coding style:
our coding standards have a convention that says never pass non-const references.
Whether you use the C++ reference mechanism or the C++ pointer mechanism, you're passing a reference (in the English sense) to internal storage that will change. I think your coding standard is trying to tell you not to do that at all, not so much that you can't use references to do it but that you should do it another way.
I just came across a question on programmers.stackexchange about doing delete this; inside a member function.
From what I understand it is generally a no-no; however, there are circumstances where it can be useful. When would something like that be useful, and what are the technical reasons for not doing it?
Generally speaking, it's a bad idea: you're technically still inside a member function when you do it, and suddenly every member of that object is invalid. Obviously, if you do not touch anything after the delete this; call, you'll be okay. But it's very easy to forget, try to access a member variable, get undefined behaviour, and then spend time in the debugger.
That said, it's used in things like Microsoft's Component Object Model (COM) when releasing a component (note: this isn't exactly what COM does, as CashCow points out, and is for illustrative purposes only):
void AddRef() { m_nRefs++; }
void Release()
{
m_nRefs--;
if(m_nRefs == 0)
delete this;
// class member-variables now deallocated, accessing them is undefined behaviour!
} // eo Release
That said, in C++ we have smart pointers (such as boost::shared_ptr) to manage the lifetimes of objects for us. Given that COM is inter-process and accessible from languages such as VB, smart pointers were not an option for the design team.
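By way of comparison, a tiny sketch (illustrative only, not COM code) of how a shared_ptr makes the same reference counting implicit:

#include <memory>

struct Component { /* ... */ };

void example()
{
    std::shared_ptr<Component> a = std::make_shared<Component>();   // count == 1
    std::shared_ptr<Component> b = a;                               // count == 2
    a.reset();                                                      // count == 1
    b.reset();                                                      // count == 0, Component is deleted
}   // no manual AddRef/Release, and no delete this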
delete this; is commonly used in reference counting patterns. The object deletes itself when its reference count drops to zero. It is perfectly ok provided no further reference is made to the object being deleted. It also requires that the said object resides on the heap/free store.
I use it in my message handling. It predates shared_ptr, and it lets the message decide whether to delete itself (asynchronous) or unblock the sender (synchronous).