C++ after delete

I am new to C++ and I was wondering why...
#include <iostream>
using namespace std;

class myClass {
public:
    void myMethod() {
        cout << "It works!" << endl;
    }
    myClass() {
        cout << "myClass is constructed!" << endl;
    }
    ~myClass() {
        cout << "This class is destructed!" << endl;
    }
};

int main()
{
    myClass c;
    c.myMethod();
    myClass *e = &c;
    delete e;
    cout << "This is from main" << endl;
    return 0;
}
So that is the code, and the output is:
myClass is constructed!
It works!
This class is destructed!
I am wondering where the "This is from main" output went. Does C++ not execute code after the delete keyword?

You can only delete objects that have been created with new.
What you're doing is undefined behaviour (UB): you are deleting an object that was never allocated with new, and on top of that its destructor would run a second time when c goes out of scope (double destruction).
By the way, while in this case your program stopped execution right after the statement that had UB, it doesn't necessarily have to happen this way, because the standard says:
However, if any such execution contains an undefined operation, this
International Standard places no requirement on the implementation
executing that program with that input (not even with regard to
operations preceding the first undefined operation).
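For illustration, here is a minimal corrected sketch of the program above: the automatic object destroys itself at scope exit, and delete is applied only to an object that actually came from new.

#include <iostream>
using namespace std;

class myClass {
public:
    void myMethod() { cout << "It works!" << endl; }
    myClass() { cout << "myClass is constructed!" << endl; }
    ~myClass() { cout << "This class is destructed!" << endl; }
};

int main()
{
    myClass c;                 // automatic storage: destroyed automatically at scope exit
    c.myMethod();

    myClass *e = new myClass;  // dynamic storage: must be paired with delete
    delete e;                  // fine: e was allocated with new

    cout << "This is from main" << endl;
    return 0;
}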

You have undefined behavior. You are not allowed to delete something that was not allocated with new. In doing so you have undefined behavior and your program is allowed to do what it wants.
Most likely you received some sort of hard fault that stopped the program from running.

You caused undefined behavior by deleting something that was not allocated with new. You don't need to delete pointers that merely point at some existing object. You only delete what you created by calling new (not placement new!).
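As a small sketch of the placement-new caveat (Widget is a hypothetical name for this demo): an object constructed with placement new is destroyed by calling its destructor explicitly, never with delete.

#include <iostream>
#include <new>

struct Widget {
    ~Widget() { std::cout << "Widget destroyed\n"; }
};

int main()
{
    alignas(Widget) unsigned char buf[sizeof(Widget)];
    Widget* w = new (buf) Widget;  // placement new: constructs in buf, allocates nothing
    w->~Widget();                  // correct: explicit destructor call
    // delete w;                   // wrong: buf was never allocated with new
    return 0;
}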

Perhaps it is a good idea for you to read about stack and heap memory.
You must call free only on memory you allocated with malloc (or variations such as calloc).
For example:
char *c = (char *)malloc(255);
...
free(c);
You must delete only if you use new.
MyClass *e = new MyClass();
...
delete e;
You must use delete[] only if you use new[].
char *data = new char[20];
...
delete[] data;
Now, if you do something like this:
...
{
    int x;
    x = 3;
}
x will be destroyed after the closing brace because it is out of scope; x lives on the stack. However, if you use malloc or new, the pointer variable itself can be lost if you are not careful, while the memory stays allocated. This is a memory leak.
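For contrast, here is a minimal sketch of such a leak (leak is a hypothetical function name): the pointer itself dies at the closing brace, but the allocation it owned does not.

void leak()
{
    int *x = new int(3);  // heap allocation owned only by the local pointer x
}                         // x goes out of scope; the int is never freed: a leak

int main()
{
    leak();
    return 0;             // the allocation made in leak() is still unreleased
}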
What you have is even more dangerous. You are deleting something that was never allocated. The behavior is undefined. With time and patience, a skilled attacker can study the behavior and may be able to break into your system and acquire the same privileges as your program.

Accessing pointers after deletion

I have a code snippet like the one below. I created some objects of my Something class with dynamic memory allocation and then deleted them.
The code prints wrong data, which I expect, but why does ->show not crash?
In what case, and how, would ->show cause a crash?
Is it possible for the memory locations of i, ii, iii to be overwritten with some other object?
I am trying to understand why, even after delete has freed the memory location to be reused by something else, it still has the information needed by ->show!
#include <iostream>
#include <vector>

class Something
{
public:
    Something(int i) : i(i)
    {
        std::cout << "+" << i << std::endl;
    }
    ~Something()
    {
        std::cout << "~" << i << std::endl;
    }
    void show()
    {
        std::cout << i << std::endl;
    }
private:
    int i;
};

int main()
{
    std::vector<Something *> somethings;
    Something *i = new Something(1);
    Something *ii = new Something(2);
    Something *iii = new Something(3);
    somethings.push_back(i);
    somethings.push_back(ii);
    somethings.push_back(iii);
    delete i;
    delete ii;
    delete iii;
    std::vector<Something *>::iterator n;
    for(n = somethings.begin(); n != somethings.end(); ++n)
    {
        (*n)->show(); // In what case would this line crash?
    }
    return 0;
}
The code prints wrong data, which I expect, but why does ->show not crash?
Why do you simultaneously expect the data to be wrong, but also that it would crash?
The behaviour of indirecting through an invalid pointer is undefined. It is not reasonable to expect the data to be correct, nor to expect the data to be wrong, nor to expect that it should crash, nor to expect that it shouldn't crash.
In what case, and how, would ->show cause a crash?
There is no situation where the C++ language specifies the program to crash. Crashing is a detail of the particular implementation of C++.
For example, a Linux system will typically force the process to crash due to "segmentation fault" if you attempt to write into a memory area that is marked read-only, or attempt to access an unmapped area of memory.
There is no direct way in standard C++ to create memory mappings: The language implementation takes care of mapping the memory for objects that you create.
Here is an example of a program that demonstrably crashes on a particular system:
int main() {
    int* i = nullptr;
    *i = 42;
}
But C++ does not guarantee that it crashes.
Is it possible for the memory locations of i, ii, iii to be overwritten with some other object?
The behaviour is undefined. Anything is possible as far as the language is concerned.
Remember, a pointer stores a memory address. After a call to delete, the dynamic memory is deallocated, but the pointer itself still stores the old address. If we nulled the pointer after the delete, indirecting through it would then be a null-pointer access, which typically crashes (though that, too, is undefined behaviour).
See this question: What happens to the pointer itself after delete?
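For completeness, here is a sketch of how the question's program avoids dangling pointers altogether by using std::unique_ptr (C++14's std::make_unique assumed); Something is reduced to its essentials:

#include <iostream>
#include <memory>
#include <vector>

class Something {
public:
    explicit Something(int i) : i(i) {}
    void show() { std::cout << i << std::endl; }
private:
    int i;
};

int main()
{
    std::vector<std::unique_ptr<Something>> somethings;
    somethings.push_back(std::make_unique<Something>(1));
    somethings.push_back(std::make_unique<Something>(2));
    somethings.push_back(std::make_unique<Something>(3));

    for (auto& s : somethings)
        s->show();   // safe: each pointer still owns a live object

    return 0;        // destructors run exactly once, when the vector dies
}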

How to handle a double free crash in C++

Deleting the same pointer twice causes harmful effects, such as crashing the program, and programmers should try to avoid it, as it is not allowed.
But if somebody does it anyway, how do we take care of it?
Since delete in C++ is a noexcept operator, it will not throw any exceptions, and its return type is void, so how do we catch this kind of error?
Below is the code snippet
#include <iostream>
#include <stdexcept>
#include <string>
using namespace std;

class myException : public std::runtime_error
{
public:
    myException(std::string const& msg) :
        std::runtime_error(msg)
    {
        cout << "inside class \n";
    }
};

int main()
{
    int* set = new int[100];
    cout << "memory allocated \n";
    //use set[]
    delete [] set;
    cout << "After delete first \n";
    try {
        delete [] set;
        throw myException("Error while deleting data \n");
    }
    catch(std::exception &e)
    {
        cout << "exception \n";
    }
    catch(...)
    {
        cout << "generic catch \n";
    }
    cout << "After delete second \n";
    return 0;
}
In this case I tried to catch the exception, but with no success.
Please provide your input on how to take care of this type of scenario.
Thanks in advance!
Given that the behaviour on a subsequent delete[] is undefined, there's nothing you can do, aside from writing
set = nullptr;
immediately after the first delete[]. This exploits the fact that deleting a null pointer is a no-op.
But really, that just encourages programmers to be sloppy.
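A minimal sketch of that pattern, for what it's worth:

int main()
{
    int* set = new int[100];
    delete[] set;
    set = nullptr;   // later deletes become harmless no-ops
    delete[] set;    // deleting a null pointer does nothing
    return 0;
}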
Segmentation faults, bad memory accesses, and bus errors cannot be caught as exceptions. Programmers need to manage their own memory correctly, since you do not have garbage collection in C/C++.
But you are using C++, no? Why not make use of RAII?
Here is what you should strive to do:
Make memory ownership explicit by using std::unique_ptr or std::shared_ptr and family.
Avoid explicit raw calls to new or delete; make use of make_unique, make_shared, or allocate_shared.
Make use of containers like std::vector or std::array instead of creating dynamic arrays or allocating arrays on the stack, respectively.
Run your code through Valgrind (Memcheck) to make sure there are no memory-related issues.
If you are using shared_ptr, you can use a weak_ptr to get access to the underlying object without incrementing the reference count. In that case, if the underlying object has already been destroyed, constructing a shared_ptr from the expired weak_ptr throws a bad_weak_ptr exception. This is the only scenario I know of where an exception is thrown for you to catch when accessing a dead pointer.
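A short sketch of that scenario, assuming C++11's <memory>: constructing a shared_ptr from an expired weak_ptr throws std::bad_weak_ptr.

#include <iostream>
#include <memory>

int main()
{
    std::weak_ptr<int> w;
    {
        auto s = std::make_shared<int>(42);
        w = s;                       // w observes s without owning it
    }                                // s destroyed; w is now expired

    try {
        std::shared_ptr<int> s2(w);  // constructing from an expired weak_ptr...
    } catch (const std::bad_weak_ptr& e) {
        std::cout << "caught: " << e.what() << '\n';  // ...throws std::bad_weak_ptr
    }
    return 0;
}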
Code should undergo multiple levels of testing iterations, maybe with different sets of tools, before being committed.
There is a very important concept in C++ called RAII (Resource Acquisition Is Initialisation).
This concept encapsulates the idea that no object may exist unless it is fully serviceable and internally consistent, and that deleting the object will release any resources it was holding.
For this reason, when allocating memory we use smart pointers:
#include <memory>
#include <iostream>
#include <algorithm>
#include <iterator>

int main()
{
    using namespace std;

    // allocate an array into a smart pointer
    auto set = std::make_unique<int[]>(100);
    cout << "memory allocated \n";

    //use set[]
    for (int i = 0 ; i < 100 ; ++i) {
        set[i] = i * 2;
    }
    std::copy(&set[0], &set[100], std::ostream_iterator<int>(cout, ", "));
    cout << std::endl;

    // delete the set
    set.reset();
    cout << "After delete first \n";

    // delete the set again
    set.reset();
    cout << "After delete second \n";

    // set also deleted here through RAII
}
I'm adding another answer here because previous answers focus very strongly on manually managing that memory, while the correct answer is to avoid having to deal with that in the first place.
#include <iostream>
#include <vector>

int main() {
    std::vector<int> set(100);
    std::cout << "memory allocated\n";
    //use set
}
This is it. This is enough. This gives you 100 integers to use as you like. They will be freed automatically when control flow leaves the function, whether through an exception, or a return, or by falling off the end of the function. There is no double delete; there isn't even a single delete, which is as it should be.
Also, I'm horrified to see suggestions in other answers for using signals to hide the effects of what is a broken program. If someone is enough of a beginner to not understand this rather basic stuff, PLEASE don't send them down that path.

Calling delete on automatic objects

Consider the following C++ code:
#include <iostream>
using namespace std;

class test
{
public:
    int val;
    test() : val(0) {}
    ~test()
    {
        cout << "Destructor called\n";
    }
};

int main()
{
    test obj;
    test *ptr = &obj;
    delete ptr;
    cout << obj.val << endl;
    return 0;
}
I know delete should be called only on dynamically allocated objects, but what would happen to obj now?
OK, I get that we are not supposed to do such a thing. Now, if I am writing the following implementation of a smart pointer, how can I make sure that such a thing doesn't happen?
class smart_ptr
{
public:
    int *ref;
    int *cnt;

    smart_ptr(int *ptr)
    {
        ref = ptr;
        cnt = new int(1);
    }

    smart_ptr& operator=(smart_ptr &smptr)
    {
        if(this != &smptr)
        {
            // House keeping
            (*cnt)--;
            if(*cnt == 0)
            {
                delete ref;
                delete cnt;
                ref = 0;
                cnt = 0;
            }

            // Now update
            ref = smptr.ref;
            cnt = smptr.cnt;
            (*cnt)++;
        }
        return *this;
    }

    ~smart_ptr()
    {
        (*cnt)--;
        if(*cnt == 0)
        {
            delete ref;
            delete cnt;
            ref = 0;
            cnt = 0;
        }
    }
};
You've asked two distinct questions in your post. I'll answer them separately.
but what would happen to obj now?
The behavior of your program is undefined. The C++ standard makes no comment on what happens to obj now. In fact, the standard makes no comment on what your program does before the error, either. It simply is not defined.
Perhaps your compiler vendor makes a commitment to what happens, perhaps you can examine the assembly and predict what will happen, but C++, per se, does not define what happens.
Practically speaking[1], you will likely get a warning message from your standard library, or you will get a segfault, or both.
[1]: Assuming that you are running on either Windows or a UNIX-like system with an MMU. Other rules apply to other compilers and OSes.
how can I make sure that [deleting a stack variable] doesn't happen?
Never initialize smart_ptr with the address of a stack variable. One way to do that is to document the interface to smart_ptr. Another way is to redefine the interface so that the user never passes a pointer to smart_ptr; make smart_ptr responsible for invoking new.
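A sketch of that second approach (owned_ptr and make_owned are hypothetical names): the constructor is private, so the only way to obtain one is through a factory that calls new itself, and a stack address can never get in.

#include <utility>

template <typename T>
class owned_ptr {
    T* ref;
    explicit owned_ptr(T* p) : ref(p) {}   // private: users cannot pass raw pointers

public:
    ~owned_ptr() { delete ref; }
    owned_ptr(const owned_ptr&) = delete;
    owned_ptr& operator=(const owned_ptr&) = delete;
    owned_ptr(owned_ptr&& o) noexcept : ref(o.ref) { o.ref = nullptr; }

    T& operator*() const { return *ref; }
    T* operator->() const { return ref; }

    // The only way to create one: the factory performs the new itself.
    template <typename U, typename... Args>
    friend owned_ptr<U> make_owned(Args&&... args);
};

template <typename U, typename... Args>
owned_ptr<U> make_owned(Args&&... args)
{
    return owned_ptr<U>(new U(std::forward<Args>(args)...));
}

int main()
{
    auto p = make_owned<int>(42);    // heap allocation guaranteed
    // int x; owned_ptr<int> q(&x); // does not compile: constructor is private
    return *p == 42 ? 0 : 1;
}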
Your code has undefined behaviour because you used delete on a pointer that was not allocated with new. This means anything could happen and it's impossible to say what would happen to obj.
I would guess that on most platforms your code would crash.
delete tries to access obj's space in memory, but the operating system does not allow this, and the program crashes (core dumped).
It's undefined what will happen so you can't say much. The best you can do is speculate for particular implementations/compilers.
It's not just undefined behavior, as stated in other answers. This will almost certainly crash.
The first issue is the attempt to free a stack variable.
The second issue occurs at program termination, when the destructor of test is called for obj a second time.

After using 'delete this' in a member function I am able to access other member functions. Why?

I just wrote a sample program to see the behaviour of delete this:
#include <iostream>
using namespace std;

class A
{
    ~A() { cout << "In destructor \n"; }
public:
    int a;
    A() { cout << "In constructor \n"; }

    void fun()
    {
        cout << "In fun \n";
        delete this;
        cout << this->a << "\n";  // output is 0
        this->fun_2();            // how am I able to call fun_2 if delete this was called first ??
    }

    void fun_2()
    {
        cout << "In fun_2 \n";
    }
};

int main()
{
    A *ptr = new A;
    ptr->a = 100;
    ptr->fun();      // delete this will be executed here
    ptr->fun_2();    // how am I able to execute fun_2 when this pointer is deleted ??
    cout << ptr->a << "\n";  // prints 0
    return 0;
}
Output:
In constructor
In fun
In destructor
0
In fun_2
In fun_2
0
Questions:
After executing delete this in fun(), how am I able to access fun_2() through the this pointer in fun()?
Now, in main, how am I able to call ptr->fun_2() when the object this pointer refers to has been deleted?
If I am able to access member functions after destroying the object, then why do the data members come out as zero ('0')?
I am using Ubuntu Linux and the g++ compiler.
What you see is undefined behavior: it might work, or it may crash.
The pointer is pointing to a deleted instance - it is a dangling pointer.
This is undefined behaviour as well: you may see a zero or a garbage value.
Eric Lippert provided a very nice "book in a table drawer of a hotel room" analogy in his answer to a question about pointers to local variables after the function has returned, it is equally applicable here.
Let's compare that to a similar scenario:
A *a = new A();
func(a);
delete a;
func2(a);
In my sample the compiler just passes the pointer a to func and func2; it does not care whether it points to a valid object. So if you call func2(a) and func2 dereferences the pointer, this is undefined behaviour.
The program might crash if the memory was released back to the operating system and the program cannot access *a anymore. Normally, though, delete keeps the memory allocated rather than passing it back to the operating system, so accessing *a after delete will not give an exception, but just return some value. This might be the previous value of *a, but it might also be any other value, depending on the implementation of delete and on other calls to new made between delete a and the access of *a. MSVC, for example, sets the memory to a predefined pattern in debug mode so you can easily spot accesses to freed memory.
Why is this related to your question? Because the compiler passes this as a hidden, implicit parameter.
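A conceptual sketch of that hidden parameter (fun_free is a hypothetical name; real compilers lower calls differently): a member call p->fun() behaves roughly like a free function receiving p explicitly, and neither form checks whether the pointer is still valid.

#include <iostream>

struct A {
    int a;
    void fun() { std::cout << a << '\n'; }   // receives the hidden 'this'
};

void fun_free(A* self) { std::cout << self->a << '\n'; }  // 'self' plays the role of 'this'

int main()
{
    A* p = new A{42};
    p->fun();      // hidden this = p
    fun_free(p);   // explicit self = p: same shape
    delete p;
    // p->fun() here would pass the dangling p, exactly like fun_free(p) would
    return 0;
}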

Can someone explain exactly what happens if an exception is thrown during the process of allocating an array of objects on the heap?

I defined a class foo as follows:
#include <iostream>
using namespace std;

class foo {
private:
    static int objcnt;
public:
    foo() {
        if(objcnt == 8)
            throw outOfMemory("No more space!");
        else
            objcnt++;
    }

    class outOfMemory {
    public:
        outOfMemory(const char* msg) { cout << msg << endl; }
    };

    ~foo() { cout << "Deleting foo." << endl; objcnt--; }
};
int foo::objcnt = 0;
And here's the main function:
int main() {
    try {
        foo* p = new foo[3];
        cout << "p in try " << p << endl;
        foo* q = new foo[7];
    } catch(foo::outOfMemory& o) {
        cout << "Out-of-memory Exception Caught." << endl;
    }
}
It is obvious that the line foo* q = new foo[7]; successfully creates only 5 objects, and on the 6th an out-of-memory exception is thrown. But it turns out that there are only 5 destructor calls, and the destructor is not called for the array of 3 objects stored at the position p points to. So I am wondering why? How come the program only calls the destructor for those 5 objects?
The "atomic" C++ allocation and construction functions are correct and exception-safe: If new T; throws, nothing leaks, and if new T[N] throws anywhere along the way, everything that's already been constructed is destroyed. So nothing to worry there.
Now a digression:
What you always must worry about is using more than one new expression in any single unit of responsibility. Basically, you have to consider any new expression as a hot potato that needs to be absorbed by a fully-constructed, responsible guardian object.
Consider new and new[] strictly as library building blocks: You will never use them in high-level user code (perhaps with the exception of a single new in a constructor), and only inside library classes.
To wit:
// BAD:
A * p = new A;
B * q = new B; // Ouch -- *p may leak if this throws!
// Good:
std::unique_ptr<A> p(new A);
std::unique_ptr<B> q(new B); // who cares if this throws
std::unique_ptr<C[]> r(new C[3]); // ditto
As another aside: The standard library containers implement a similar behaviour: If you say resize(N) (growing), and an exception occurs during any of the constructions, then all of the already-constructed elements are destroyed. That is, resize(N) either grows the container to the specified size or not at all. (E.g. in GCC 4.6, see the implementation of _M_fill_insert() in bits/vector.tcc for a library version of exception-checked range construction.)
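A small sketch of that container guarantee (Thrower is a hypothetical type for the demo): if a constructor throws while resize grows the vector, the already-built elements are destroyed and the vector keeps its old size.

#include <iostream>
#include <stdexcept>
#include <vector>

struct Thrower {
    static int built;
    Thrower() { if (++built == 5) throw std::runtime_error("boom"); }
};
int Thrower::built = 0;

int main()
{
    std::vector<Thrower> v;
    try {
        v.resize(10);   // the 5th construction throws mid-way
    } catch (const std::runtime_error&) {
        std::cout << "size after failed resize: " << v.size() << '\n';  // prints 0
    }
    return 0;
}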
Destructors are only called for the fully constructed objects - those are objects whose constructors completed normally. That only happens automatically if an exception is thrown while new[] is in progress. So in your example the destructors will be run for the five objects fully constructed while q = new foo[7] was in progress.
Since the new[] for the array that p points to completed successfully, that array has been handed over to your code, and the C++ runtime doesn't care about it anymore - no destructors will be run unless you do delete[] p.
You get the behavior you expect when you declare the arrays with automatic storage duration (on the stack):
int main()
{
    try
    {
        foo p[3];
        cout << "p in try " << p << endl;
        foo q[7];
    }
    catch(foo::outOfMemory& o)
    {
        cout << "Out-of-memory Exception Caught." << endl;
    }
}
In your code only the pointers were local automatic variables. Pointers don't have any associated cleanup when the stack is unwound. As others have pointed out, this is why you generally do not keep raw pointers in C++ code; they are usually wrapped inside a class object that uses the constructor/destructor to control their lifespan (a smart pointer or container).
As a side note, it is usually better to use std::vector than raw arrays (in C++11 std::array is also useful if you have a fixed-size array). This is because the stack has a limited size and these objects put the bulk of the data on the heap. The extra methods provided by these classes make them much nicer to handle in the rest of your code, and if you absolutely must have an old-style array pointer to pass to a C function, it is easy to obtain.
int main()
{
    try
    {
        std::vector<foo> p(3);
        cout << "p in try " << p.data() << endl;  // print the array's address; the vector itself has no operator<<
        std::vector<foo> q(7);
        // Now you can pass p/q to functions much more easily.
    }
    catch(foo::outOfMemory& o)
    {
        cout << "Out-of-memory Exception Caught." << endl;
    }
}