Test if an object was deleted - C++

Please look at the following code:
class Node
{
private:
    double x, y;
public:
    Node(double xx, double yy) : x(xx), y(yy) {}
};

int main()
{
    Node *n1 = new Node(1, 1);
    Node *n2 = n1;
    delete n2;
    n2 = NULL;
    if (n1 != NULL) // Bad test
    {
        delete n1;  // throws an exception
    }
}
There are two pointers, n1 and n2, pointing to the same object. I would like to detect, by testing n1, whether the object has already been deleted through n2. But this test results in an exception (deleting the same object twice is, in fact, undefined behavior).
Is there any way to determine through n1 whether the object has or has not been deleted?

As far as I know, the typical way to deal with this situation is to use reference-counted pointers, the way (for example) COM does. In Boost, there's the shared_ptr template class that could help (http://www.boost.org/doc/libs/1_42_0/libs/smart_ptr/shared_ptr.htm).
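For illustration, here is a minimal sketch of the original Node example rewritten with a reference-counted pointer (using std::shared_ptr, the standard-library equivalent of Boost's class); the object is destroyed exactly once, when the last owner lets go:

#include <memory>

class Node
{
    double x, y;
public:
    Node(double xx, double yy) : x(xx), y(yy) {}
};

int main()
{
    std::shared_ptr<Node> n1 = std::make_shared<Node>(1, 1);
    std::shared_ptr<Node> n2 = n1;  // both pointers share ownership (count == 2)

    n2.reset();                     // n2 gives up its share; the object survives
    if (n1)                         // n1 is still valid and safe to use here
    {
        // ...
    }
}                                   // n1 goes out of scope: Node destroyed once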

No. Nothing in your code has a way of reaching the n1 pointer and changing it when the pointed-to object is destroyed.
For that to work, the Node would have to (for instance) maintain a list of all pointers to it, and you would have to manually register (i.e. call a method) every time you copy the pointer value. It would be quite painful to work with.
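Just to make the pain concrete, here is a sketch of what such manual registration might look like (all names are made up for illustration):

#include <set>

class Node
{
    std::set<Node**> observers;       // addresses of every registered pointer
public:
    void register_ptr(Node** p)   { *p = this; observers.insert(p); }
    void unregister_ptr(Node** p) { observers.erase(p); }

    ~Node()
    {
        for (Node** p : observers)    // the destructor nulls every pointer
            *p = nullptr;             // that was registered with this Node
    }
};

int main()
{
    Node* a = nullptr;
    Node* b = nullptr;
    Node* n = new Node;
    n->register_ptr(&a);              // every copy must be registered by hand
    n->register_ptr(&b);
    delete n;                         // a and b are now both nullptr
}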

When you have an object, it lives at some place in memory, and that address is the value stored in both n1 and n2. When you delete the object, the memory it used is released and becomes invalid, so you can never access anything through n1 once the object has been deleted.
I suggest creating a wrapper object that contains a counter and a pointer to the actual object. Instead of pointing at the object directly, you point at the wrapper, and when you want to delete the object you call a method on the wrapper:
If you want another pointer to the object, increase the wrapper's counter and point to the wrapper. If you want to delete the object, decrease the counter and set your pointer to the wrapper to null. Once the counter reaches zero, you can safely delete the actual object and then the wrapper.
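A rough sketch of that wrapper idea, assuming the Node class from the question (names and layout are purely illustrative, not a full implementation):

struct NodeWrapper
{
    Node* object;    // the actual Node
    int   counter;   // how many pointers currently go through this wrapper
};

// Take another reference: bump the counter and hand back the wrapper.
NodeWrapper* acquire(NodeWrapper* w)
{
    ++w->counter;
    return w;
}

// Drop a reference; the caller sets its own wrapper pointer to null afterwards.
void release(NodeWrapper* w)
{
    if (--w->counter == 0)
    {
        delete w->object;   // last reference gone: delete the actual object
        delete w;           // ...and then the wrapper itself
    }
}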

Related

Smart pointers to an explicitly created object

I have read a lot of questions about this but was not able to find an answer to mine.
I have created a class as follows:
#include <iostream>
using namespace std;

class exampleClass {
public:
    exampleClass(int n) {
        cout << "Created Class" << endl;
        this->number = n;
    }
    ~exampleClass() {
        cout << endl << "This class is destroyed Now" << endl;
    }
    template<typename t>
    t addNum(t a, t b) {
        return a + b;
    }
    void print() {
        cout << this->number << endl;
    }
private:
    int number;
};
and I create 2 shared_ptrs (or, for that matter, unique_ptrs; the error is the same) as follows:
#include <memory>

int main() {
    exampleClass* object = new exampleClass(60);
    std::shared_ptr<exampleClass> p1(object);
    std::shared_ptr<exampleClass> p2(object);
    p1->print();
}
Now the error it throws at the end is:
free(): double free detected in tcache 2
Aborted (core dumped)
I am not able to understand why there is an error at the end. Shouldn't the above code be equivalent to p2 = p1 (in the case of shared_ptr) or p2 = std::move(p1) for unique_ptr, as both pointers refer to the same object?
TIA
PS - The title might be a little misleading or inaccurate, but I did not know exactly what the title should be.
When you create a smart pointer, it takes ownership of the raw pointer and deletes it when it goes out of scope (or when the last reference is released, in the case of a shared pointer).
When you create two smart pointers from the same raw pointer, they will both delete the pointer at the end of their lifetimes, because they don't know about each other.
int main()
{
    // Create a shared pointer with a new object
    std::shared_ptr<exampleClass> p1 = std::make_shared<exampleClass>(60);
    // Now you can safely create a second pointer from your existing one.
    std::shared_ptr<exampleClass> p2 = p1;
    p1->print();
    p2->print();
}
When you create a shared_ptr from a raw pointer, it takes ownership of the raw pointer, and when the smart pointer goes out of scope, it will call delete on the owned resource. Giving the same raw pointer to 2 different shared_ptrs causes a double free, as both of them will try to free the resource.
If you need 2 shared_ptrs that share the same resource, you can copy the first one:
int main() {
    exampleClass* object = new exampleClass(60);
    std::shared_ptr<exampleClass> p1(object);
    std::shared_ptr<exampleClass> p2 = p1;
}
This way they share ownership of the resource (shared_ptrs have an internal reference counter that tracks how many of them own a resource: when you copy a shared_ptr, the counter is incremented; when one goes out of scope, the counter is decremented; only when the counter reaches zero is the resource freed), so the resource is freed exactly once, when the last shared_ptr owning it goes out of scope.
It's usually preferable to avoid explicitly writing out new and use make_shared, which does the allocation, creates the object for you and returns a shared_ptr that owns it:
auto object = std::make_shared<exampleClass>(60);
Some additional advanced reading on the topic, not strictly related to the question:
performance differences when using make_shared vs manually calling new: here
memory implications of using make_shared with weak_ptrs and large objects: here (Thanks for bringing this up @Yakk - Adam Nevraumont, this was new for me :))

Why does the actual node have to be a pointer?

So when we actually create the node object, why does it have to be a pointer? Why can't we just make it a regular Node object, use . for the data, and then use the arrow operator for the next node?
class Node {
    int data;
    Node * next;
};

int main() {
    Node * node1, node2; // why make it a pointer
    Node node3, node4;   // Why don't people leave it as a non-pointer,
                         // since you already have Node* next in the class?
}
Pointers play well with dynamic allocation. You can continue using the object through pointers until you purposefully free it. Automatic objects, on the other hand, get destroyed at the end of their scope, leaving all pointers (and references!1) to them dangling.
If you know that you won't use any of the pointers outside the scope that contains the object itself (this is likely true for your example in main()), then go right ahead and use an automatic object.
But one final complication is that if one of the pointers is a "smart" pointer that knows how to free the object it points to, you have to create the attached object using the matching allocation function. Attempting to free an automatic (scoped)2 object will result only in misery.
1 The oft-repeated statements like "a reference is equivalent to the object itself" and "a well-defined program cannot create an invalid reference" are horribly wrong but that's too long a discussion to have here.
2 Misery also accompanies trying to free an object which is static or a member subobject or dynamically allocated using a different allocator.
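A minimal sketch of the difference this answer describes (automatic vs. dynamic lifetime), assuming the Node class from the question:

// Dangles the moment the function returns: the automatic object dies with its scope.
Node* make_dangling()
{
    Node local;
    return &local;    // caller receives a pointer to a destroyed object
}

// Stays valid until someone explicitly deletes it.
Node* make_dynamic()
{
    return new Node;  // dynamic allocation outlives the enclosing scope
}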
You can do this for educational purposes.
#include <iostream>

class Node {
public:
    int data;
    Node * next;
    Node(int d, Node *nextNode = NULL) : data(d), next(nextNode) {}
    Node() {}
};

int main() {
    Node node3, node4(20);
    node3.data = 10;
    node3.next = &node4;
    std::cout << node3.data << " " << node3.next->data << std::endl;
}
But in the real world, when creating a linked list, you don't know in advance when or how many nodes you will have to create. In your case it's a small class, but consider a big class with several complex data structures. If you go this way, you will unnecessarily create objects on the stack.
It also creates ambiguity when releasing memory, and you will have to take special care. In fact, you will not be able to release the memory of your first node until execution leaves its scope.
And here is one big issue: if you are passing this list to some other function (say, creating the list inside a function and returning it to the caller), the first Node will go out of scope and the memory allocated to your list will be leaked.
Hope this clears up your doubt.
Try changing next into a plain Node object and compiling the code. You will get a compilation error saying that Node is an incomplete type. This is because a Node containing a Node would recurse forever, and the compiler cannot calculate how big that field is supposed to be.
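For example, this variant does not compile, which is exactly the error described above:

class Node {
    int data;
    Node next;   // error: field 'next' has incomplete type 'Node'
                 // a Node would have to contain a whole Node, which would
                 // have to contain another whole Node, and so on forever
};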
A pointer is a reference to a place in memory. Each node should reference the other node rather than store its value directly, because if it stored a copy you would need to update it every time the other node is updated.
One can also use a pointer to allocate memory on the "heap" with the new keyword (dynamic storage); otherwise the object is stored on the "stack" (automatic storage), which has very limited memory (you can get a stack overflow).

What all should be deleted in the destructor of a class

It's been a while since I've done any C++ coding, and I was wondering which member variables in a basic linked list should be deleted in the destructor; unfortunately I can't consult my C++ handbook on the matter at the moment. The linked list class looks as follows:
#include <string>
#include <vector>
using namespace std;

class Node
{
    Node *next;
    string sName;
    vector<char> cvStuff;

    Node(string _s, int _i)
    {
        next = nullptr;
        sName = _s;
        for (int i = 0; i < _i; i++)
        {
            cvStuff.insert(cvStuff.end(), '_');
        }
    }

    ~Node()
    {
        // since sName is assigned during runtime, do I delete it?
        // same for cvStuff?
    }
};
I'm also curious, if in the destructor I call
delete next;
will that go to the next node of the linked list and delete that node, and thus more or less recursively delete the entire list from that point? Also, if that is the case and I choose for some reason to implement it that way, would I have to check whether next is nullptr before deleting it, or would it not make a difference?
Thank you.
Ideally, nothing: use smart pointers such as std::unique_ptr<>, std::shared_ptr<>, boost::scoped_ptr<>, etc.
Otherwise, you delete what's an owning native pointer. Is next owning?
Do you plan to delete something in the middle of the list? If yes, you can't delete in the destructor.
Do you plan to share tails? If yes, you need reference-counted smart pointers.
It's okay to delete nullptr (does nothing). In the example, you shouldn't delete sName and cvStuff as those are scoped, thus destroyed automatically.
Also, if this is going to be a list that can grow large, you might want to destroy & deallocate *next manually. This is because you don't want to run out of stack space by recursion.
Furthermore, I suggest separating this into a List, meaning the data structure, and a ListNode, meaning an element. Your questions actually show this ambiguity: you don't know whether you're deleting the ListNode or the List in the destructor. Separating them solves this.
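A minimal sketch of that List/ListNode split, with an iterative destructor so that destroying a long list cannot exhaust the stack (the names are illustrative, not a complete implementation):

#include <string>
#include <vector>

struct ListNode
{
    ListNode*         next = nullptr;
    std::string       sName;
    std::vector<char> cvStuff;
    // ~ListNode() deletes nothing: sName and cvStuff clean themselves up,
    // and ownership of next is handled by List below.
};

struct List
{
    ListNode* head = nullptr;

    ~List()
    {
        while (head)                 // iterative walk, no recursion
        {
            ListNode* victim = head;
            head = head->next;
            delete victim;
        }
    }
};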
An object with automatic lifetime has its destructor called when it goes out of scope:
{ // scope
    std::string s;
} // end scope -> s.~string()
A dynamic object (allocated with new) does not have its destructor called unless delete is called on it.
For a member variable, the scope is the lifetime of the object.
struct S {
    std::string str_;
    char* p_;
};

int main() { // scope
    { // scope
        S s;
    } // end scope -> s.~S() -> str_.~string()
}
Note in the above that nothing special happens to p_: it's a pointer which is a simple scalar type, so the code does nothing automatic to it.
So in your list class the only thing you have to worry about is your next member: you need to decide whether it is an "owning" pointer or not. If it is an "owning" pointer then you must call delete on the object in your destructor.
Alternatively, you can leverage 'RAII' (resource acquisition is initialization) and use an object to wrap the pointer and provide a destructor that will invoke delete for you:
{ // scope
    std::unique_ptr<Node> ptr = std::make_unique<Node>(args);
} // end scope -> ptr.~unique_ptr() -> delete -> ~Node()
unique_ptr is a purely owning pointer; the other alternative is shared_ptr, which uses reference counting so that the underlying object is only deleted when no shared_ptrs to it remain.
You would consider your next pointer to be a non-owning pointer if, say, you have kept the actual addresses of the Nodes somewhere else:
std::vector<Node> nodes;
populate(nodes);
list.insert(&nodes[0]);
list.insert(&nodes[1]);
// ...
in the above case the vector owns the nodes and you definitely should not be calling delete in the Node destructor, because the Nodes aren't yours to delete.
list.insert(new Node(0));
list.insert(new Node(1));
here, the list/Nodes are the only things that have pointers to the nodes, so in this use case we need Node::~Node to call delete or we have a leak.
Basically you should just
delete next
and that's all you should do:
The string and vector objects have their own destructors, and since this object is being destructed, theirs will be called.
Deleting a null pointer is not a problem, so you don't even have to check for that.
If next is not a null pointer, it will keep on calling the destructors of the next nodes, on and on, as needed.
Just delete it and that's all, then.
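Spelled out, the destructor this answer suggests is just the following (a sketch, assuming the Node class from the question):

~Node()
{
    delete next;   // delete on a null pointer is a no-op, so no check is needed;
                   // otherwise this recursively destroys the rest of the list
}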
Your class destructor will be called (it is empty), then the member objects' destructors are called.
If a member is not an object, no destructor is called.
In your example:
- List *next: a pointer to List: no destructor called
- string sName: a string object: destructor called
- vector<char> cvStuff: a vector object: destructor called
Good news: you have nothing to do. Declaring the destructor is not even necessary here.
If you delete next in your destructor, then deleting one item will delete all the following items of your list: not very useful.
(And your List object should rather be called Node: the List is the chained result of all your nodes, held by the first node you created.)

Difference between setting a node equal to NULL vs. deleting a Node

Let's say I have a node struct defined as below:
struct Node
{
    int data;
    Node* left;
    Node* right;
};
Let's say I have two node pointers, abc and xyz, and:
abc->data = 1;
abc->right=NULL;
abc->left=xyz;
xyz->data =2;
xyz->right=NULL;
xyz->left=NULL;
Later, if I want to delete the node xyz, is it the same if I say:
delete xyz
vs. saying:
xyz=NULL;
Could someone explain the difference or point me in the right direction?
No, it is not the same. The delete X; statement calls the destructor of the object pointed to by X and releases/frees the memory previously allocated for that object by operator new.
The X = NULL; statement simply assigns the address 0x0 to the pointer X; unlike delete, it neither destroys the object pointed to by X nor releases its memory.
delete frees the memory, but does not clear the pointer.
Setting xyz to NULL just clears the pointer, but does not free the memory.
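In practice the two steps are often combined, roughly like this:

delete xyz;    // run ~Node() and return the memory to the allocator
xyz = NULL;    // then clear the pointer so a stray use or a second delete is harmless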
This is one of the many differences in memory management between C++ and Java/C#/JavaScript: in systems with garbage collection, clearing a reference such as xyz above allows the garbage collector to free the memory later. C++ (or C) does not have garbage collection, which is why memory must be managed by the program itself; otherwise you will end up with memory leaks.
As Vlad Lazarenko wrote above, these operations are not the same. In real code you should use smart pointers and not call the delete operator directly.
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <string>

boost::shared_ptr<std::string> x = boost::make_shared<std::string>("hello, world!");
x->size(); // you can call methods of std::string through the smart pointer
x.get();   // and you can always get the raw pointer to the std::string
Modern compilers allow you to write less code:
auto x = std::make_shared<std::string>("hello, world!");

Deleting a dynamic array of pointers in C++

I have a question concerning deleting a dynamic array of pointers in C++. Let's imagine that we have the following situation:
int n;
scanf("%d", &n);
Node **array1 = new Node*[n];
/* ... */
where Node is a certain structure defined beforehand. Suppose that after allocation with the new operator, we change the contents of array1 (but we do not delete anything!). What is the proper way to delete array1 and all its contents if there is a possibility of repeated pointers in the array (without sorting them or inserting them into a set, in linear time)?
Using this allocation:
Node **array1 = new Node*[n];
The contents of array1 are undefined. Each element is a Node*, and because the memory is uninitialized, the value could be anything.
Allocating an array of pointers does not construct objects of the pointed-to class.
So whatever pointers you put into the array, the objects they point to need to be constructed and destructed elsewhere.
So to answer your question, the proper way to delete array1 is
delete[] array1;
However, note that this will not result in destructors being called for each Node* - you should deal with whatever you put into the array before you delete the array.
EDIT:
I was confused by the original question, which mentioned "change the value" in the array, as if there was a valid value in the array as allocated in your example.
BUT... now that I understand that you want to keep track of the pointers for deletion later, perhaps you can simply create another array for that purpose, one in which each pointer appears only once. You keep the array you currently have above, which contains pointers to nodes that might be repeated, for whatever purpose you're using it, and you add another array for the express purpose of managing deletion, where each pointer occurs only once. It should be easy enough to set something like nodeCleanupArray[i] = pNewNode right after pNewNode = new Node(); then you can blast through that array in linear time and delete each element. (This means you wouldn't bother inspecting the elements in array1; you'd rely on nodeCleanupArray for the cleanup.)
There are MANY solutions to this sort of problem, but the most obvious choice would be to change it to use
std::vector< std::shared_ptr<Node> >
Now you will have a reference-counted pointer without writing any code, and an "array" that doesn't need to know its size in advance.
You can of course implement a reference counted object within Node, or your own container object to do the same thing, but that seems like extra hassle for little or no benefit.
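A minimal sketch of that approach, assuming Node is default-constructible:

#include <memory>
#include <vector>

int main()
{
    int n = 3;                                      // e.g. read with scanf as before
    std::vector<std::shared_ptr<Node>> array1;
    array1.reserve(n);

    auto node = std::make_shared<Node>();
    array1.push_back(node);                         // the same Node may appear
    array1.push_back(node);                         // several times: not a problem

    // When array1 goes out of scope, each Node is destroyed exactly once,
    // when the last shared_ptr that owns it disappears.
}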
Try mark and sweep :) You are trying to implement a managed environment.
Here is an example:
struct Node
{
    ...
    bool marked;
    Node() : marked(false)
    {}
};
Now delete:
void do_delete(Node **data, size_t n)
{
    size_t uniq = 0;
    Node **temp = new Node*[n];
    for (size_t i = 0; i < n; i++)
    {
        if (data[i] && !data[i]->marked)
        {
            data[i]->marked = true;   // mark: remember each node only once
            temp[uniq++] = data[i];
        }
    }
    for (size_t i = 0; i < uniq; ++i)
    {
        delete temp[i];               // sweep: delete each unique node
    }
    delete[] temp;
    delete[] data;
}
The way I'd do this is to have a reference counter and a Node::deref method that deletes the node itself when the reference count reaches 0. When iterating through the array of nodes, calling node->deref() will not actually delete the object until the last reference to that Node in the array is dropped.
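A hypothetical sketch of that intrusive-counting idea (addref/deref are made-up names, not part of the question's Node):

struct Node
{
    int refcount = 1;                 // the creator holds the first reference

    void addref() { ++refcount; }     // call once per extra pointer stored
    void deref()                      // drop one reference
    {
        if (--refcount == 0)
            delete this;              // only valid for heap-allocated Nodes
    }
};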
What is the proper way to delete the array1 and all its content
You show a single allocation; new Node*[n]. This allocation confers on the program the responsibility to call delete [] whatever_the_return_value_was. This is only about deleting that one allocation and not about deleting 'all its content'. If your program performs other allocations then the program needs to arrange for those responsibilities to be handled as well.
if there is a possibility of repeated pointers in the array
Well it would be undefined behavior to delete a pointer value that is not associated with any current allocation, so you have to avoid deleting the same pointer value more than once. This is not an issue of there being a single correct way, but an issue of programming practices, design, etc.
Typically C++ uses RAII to handle this stuff automatically instead of trying to do what you want by hand, because doing it by hand is really hard to get right. One way to use RAII here would be to have a second object that 'owns' the Nodes. Your array1 would then simply use raw pointers as 'non-owning' pointers. Deleting all the Nodes then would be done by letting the Node owning object go out of scope or otherwise be destroyed.
{
    // object that handles the ownership of Node objects.
    std::vector<std::unique_ptr<Node>> node_owner;

    // your array1 object that may hold repeated pointer values.
    std::vector<Node*> array1;

    node_owner.emplace_back(new Node);           // create new nodes
    array1.push_back(node_owner.back().get());   // put nodes in the array
    array1.push_back(node_owner.back().get());   // with possible duplicates

    // array1 gets destroyed, but its contents do not, so the repeated pointers don't matter.
    // node_owner gets destroyed and destroys all its Nodes. There are no duplicates to cause problems.
}
And the destruction does occur in linear time.