This question already has answers here:
What happens to the pointer itself after delete? [duplicate]
(3 answers)
Closed 3 years ago.
I have a code snippet like the one below. I created some dynamically allocated objects of my Something class and then deleted them.
The code prints wrong data, which I expect, but why does ->show not crash?
In what case/how would ->show cause a crash?
Is it possible to overwrite the same memory locations of i, ii, iii with some other object?
I am trying to understand why, after delete frees up the memory location to be reused by something else, it still has the information needed for ->show!
#include <iostream>
#include <vector>

class Something
{
public:
    Something(int i) : i(i)
    {
        std::cout << "+" << i << std::endl;
    }
    ~Something()
    {
        std::cout << "~" << i << std::endl;
    }
    void show()
    {
        std::cout << i << std::endl;
    }
private:
    int i;
};

int main()
{
    std::vector<Something *> somethings;
    Something *i = new Something(1);
    Something *ii = new Something(2);
    Something *iii = new Something(3);

    somethings.push_back(i);
    somethings.push_back(ii);
    somethings.push_back(iii);

    delete i;
    delete ii;
    delete iii;

    std::vector<Something *>::iterator n;
    for(n = somethings.begin(); n != somethings.end(); ++n)
    {
        (*n)->show(); // In what case would this line crash?
    }
    return 0;
}
The code prints wrong data, which I expect, but why does ->show not crash?
Why do you simultaneously expect the data to be wrong, but also that it would crash?
The behaviour of indirecting through an invalid pointer is undefined. It is not reasonable to expect the data to be correct, nor to expect the data to be wrong, nor to expect the program to crash, nor to expect that it shouldn't crash.
In what case/how would ->show cause a crash?
There is no situation where the C++ language specifies the program to crash. Crashing is a detail of the particular implementation of C++.
For example, a Linux system will typically force the process to crash due to "segmentation fault" if you attempt to write into a memory area that is marked read-only, or attempt to access an unmapped area of memory.
There is no direct way in standard C++ to create memory mappings: The language implementation takes care of mapping the memory for objects that you create.
Here is an example of a program that demonstrably crashes on a particular system:
int main() {
    int* i = nullptr;
    *i = 42;
}
But C++ does not guarantee that it crashes.
Is it possible to overwrite the same memory location of i, ii, iii with some other object?
The behaviour is undefined. Anything is possible as far as the language is concerned.
Remember, a pointer stores an integer memory address. On a call to delete, the dynamic memory is deallocated, but the pointer itself still stores the old address. If we nulled the pointer after the delete, a later call through it would typically crash with a segmentation fault instead of silently reading stale memory.
See this question: What happens to the pointer itself after delete?
Here I have a class definition. It is a little long, but the focus will be on the move constructor and the destructor. Below the class definition is a short test.
#include <cassert>
#include <iostream>
#include <utility>

template <typename T>
class SharedPtr {
public:
    SharedPtr() {}

    explicit SharedPtr(T* input_pointer) : raw_ptr_(input_pointer), ref_count_(new size_t(1)) {}

    SharedPtr(const SharedPtr& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {
        if (ref_count_) {
            ++*ref_count_;
        }
    }

    SharedPtr(SharedPtr&& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {}

    SharedPtr& operator=(SharedPtr other) {
        swap(other, *this);
        return *this;
    }

    size_t use_count() const {
        return ref_count_ ? *ref_count_ : 0;
    }

    ~SharedPtr() {
        if (ref_count_) {
            --*ref_count_;
            if (*ref_count_ == 0) {
                delete raw_ptr_;
                delete ref_count_;
            }
        }
    }

private:
    T* raw_ptr_ = nullptr;
    size_t* ref_count_ = nullptr;

    friend void swap(SharedPtr<T>& left, SharedPtr<T>& right) {
        std::swap(left.raw_ptr_, right.raw_ptr_);
        std::swap(left.ref_count_, right.ref_count_);
    }
};

int main() {
    // Pointer constructor
    {
        SharedPtr<int> p(new int(5));
        SharedPtr<int> p_move(std::move(p));
        assert(p_move.use_count() == 1);
    }
    std::cout << "All tests passed." << std::endl;
    return 0;
}
If I run the code I get an error message indicating memory corruption:
*** Error in `./a.out': corrupted size vs. prev_size: 0x0000000001e3dc0f ***
======= Backtrace: =========
...
======= Memory map: ========
...
Aborted (core dumped)
We may suspect something is wrong with the move constructor: if we move from a SharedPtr and then later destruct that SharedPtr, it will still destruct as if it were an "active" SharedPtr. So we could fix that by setting the other object's pointers to nullptr in the move constructor.
But that's not the interesting thing about this code. The interesting thing is what happens if I don't do that, and instead simply add std::cout << "x" << std::endl; to the move constructor.
The new move constructor is given below, and the rest of the code is unchanged.
SharedPtr(SharedPtr&& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {
    std::cout << "x" << std::endl;
}
The code now runs without error on my machine and yields the output:
x
All tests passed.
So my questions are:
Do you get the same results as I do?
Why does adding a seemingly innocuous std::cout line cause the program to run "successfully"?
Please note: I am not under any sort of impression that error message gone implies bug gone.
bolov's answer explains the cause of the undefined behavior (UB), when the move constructor of SharedPtr does not invalidate the moved-from pointer.
I disagree with bolov's view that it is pointless to understand UB. The question of why code changes result in different behavior in the face of UB is extremely interesting. Knowing what happens can help with debugging on one hand, and it can help attackers compromise the system on the other.
The difference in the code in question comes from adding std::cout << something. In fact, the following change also makes the crash go away:
{
    SharedPtr<int> p(new int(5));
    SharedPtr<int> p_move(std::move(p));
    assert(p_move.use_count() == 1);
    std::cout << "hi\n"; // <-- added
}
The first std::cout << allocates an internal buffer that std::cout uses. This allocation happens only once, and the question is whether it happens before or after the double free. Without the additional std::cout, the allocation happens after the double free, when the heap is already corrupted, and it is this allocation that triggers the crash. But when there is a std::cout << before the double free, there is no allocation after it.
Let's run a few other experiments to validate this hypothesis:
Remove all std::cout << lines. All works fine.
Move two calls to new int(some number) right before the end:
int main() {
    int *p2 = nullptr;
    int *cnt = nullptr;
    // Pointer constructor
    {
        SharedPtr<int> p(new int(5));
        SharedPtr<int> p_move(std::move(p));
        assert(p_move.use_count() == 1);
    }
    p2 = new int(100);
    cnt = new int(1); // <--- crash
    return 0;
}
This crashes, since the new is attempted on a corrupted heap.
Now move the two new lines slightly up, right before the closing } of the inner block. In this case the new is performed before the heap is corrupted, so nothing triggers a crash. The later delete simply puts the data in the free list, which is not corrupted. As long as the corrupted part of the heap is not touched, things will appear to work: one can call new int, get back one of the recently released blocks, and nothing bad will visibly happen.
{
    SharedPtr<int> p(new int(5));
    SharedPtr<int> p_move(std::move(p));
    assert(p_move.use_count() == 1);
    p2 = new int(100);
    cnt = new int(1);
}
delete p2;
delete cnt;
p2 = new int(100); // No crash. We are reusing one of the released blocks
cnt = new int(1);
The interesting fact is that a corrupted heap can go undetected until much later in the code. The program may run millions of unrelated lines and then suddenly crash on a completely unrelated new in a completely different part of the code. This is why sanitizers and the likes of valgrind are needed: memory corruption can be practically impossible to debug otherwise.
Now, the really interesting question is "can this be exploited more than for denial of service?". Yes it can. It depends on the kind of object that is destroyed twice, and what it does in the destructor. It also depends on what happens between the first destruction of the pointer, and its second free. In this trivial example, nothing substantial seems to be possible.
SharedPtr(SharedPtr&& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {}
When you move, the moved-from object remains unchanged. This means that at some point in your program you will delete raw_ptr_ twice for the same memory, and likewise for ref_count_. This is Undefined Behavior.
The behaviour you observe falls well within Undefined Behavior, because that's what UB means: the standard doesn't mandate any kind of behavior from your program whatsoever. Trying to understand exactly why what happens happens, on your particular compiler, your particular version, your particular platform, with your specific flags, is ... kind of pointless.
I am new to C++ and I was wondering why...
#include <iostream>
using namespace std;

class myClass{
public:
    void myMethod(){
        cout << "It works!" << endl;
    }
    myClass(){
        cout << "myClass is constructed!" << endl;
    }
    ~myClass(){
        cout << "This class is destructed!" << endl;
    }
};

int main()
{
    myClass c;
    c.myMethod();
    myClass *e = &c;
    delete e;
    cout << "This is from main" << endl;
    return 0;
}
That is the code, and the output is:
myClass is constructed!
It works!
This class is destructed!
I am wondering where the "This is from main" output went. Does C++ not execute code after the delete keyword?
You can only delete objects that have been created with new.
What you're doing is UB, by means of deleting an object that was never allocated with new.
By the way, while in this case your program stopped executing right after the statement that had UB, it doesn't necessarily have to happen this way, because:
However, if any such execution contains an undefined operation, this
International Standard places no requirement on the implementation
executing that program with that input (not even with regard to
operations preceding the first undefined operation).
You have undefined behavior. You are not allowed to delete something that was not allocated with new. In doing so you have undefined behavior and your program is allowed to do what it wants.
Most likely you received some sort of hard fault that stopped the program from running.
You caused undefined behavior by deleting something that was not newed. You don't need to delete stuff that just points at some general location. You only delete that which you created by calling new (not placement new!).
Perhaps it is a good idea for you to read about stack and heap memory.
You must call free only on what you obtained from malloc (or variations such as calloc).
For example:
    char *c = (char *)malloc(255);
    ...
    free(c);
You must delete only what you obtained from new.
    MyClass *e = new MyClass();
    ...
    delete e;
You must delete[] only what you obtained from new[].
    char *data = new char[20];
    ...
    delete[] data;
Now, if you do something like this:
...
{
    int x;
    x = 3;
}
x will be destroyed after the closing brace because it goes out of scope; x lives on the stack. However, if you use malloc or new, the pointer variable itself can be lost if you are not careful, while the memory stays allocated. This is a memory leak.
What you have is even more dangerous. You are deleting something which was not allocated with new. The behavior is undefined. With time and patience, a skilled attacker can study the behavior and may be able to break into your system and acquire the same privileges as your program.
This question already has answers here:
Can a local variable's memory be accessed outside its scope?
(20 answers)
C++ delete - It deletes my objects but I can still access the data?
(13 answers)
Closed 7 years ago.
Out of fun, I decided to see what gdb would say about this code, which attempts to use a method of an already destroyed object.
#include <iostream>

class ToDestroy
{
public:
    ToDestroy() { }
    ~ToDestroy() {
        std::cout << "Destroyed!" << std::endl;
    }
    void print() {
        std::cout << "Hello!" << std::endl;
    }
};

class Good
{
public:
    Good() { }
    ~Good() { }
    void setD(ToDestroy* p) {
        mD = p;
    }
    void useD() {
        mD->print();
    }
private:
    ToDestroy* mD;
};

int main() {
    Good g;
    {
        ToDestroy d;
        g.setD(&d);
    }
    g.useD();
    return 0;
}
The output is (built with -O0 flag):
Destroyed!
Hello!
Allocating d on the heap and deleting it causes the same behaviour (i.e., no crash).
I assume the memory has not been overwritten and C++ is 'tricked' into using it normally. However, I am surprised that, after allocating on the heap and deleting, one can still use memory that is no longer assigned to them.
Can someone provide any more insight about this? Does this mean that when trying to dereference a pointer, if that memory happens to have something 'coherent' for our context the execution would not cause a SEGFAULT despite the memory not having been assigned to us?
A segfault happens when you try to access an address that the OS forbids you to access. This can be because the memory behind the address is not allocated to your process, or because it does not exist, or whatever. Here you are accessing a piece of memory that is still allocated to your process, so there is no segfault.
malloc (the allocator that manages your heap) works with buffers of a certain size to limit the number of syscalls. So there is deallocated (or uninitialized) memory in your process that you can still access.
You pass an invalid this pointer to print, but it is never dereferenced, as print is not virtual and does not access any member.
This question already has answers here:
Is there a reason to call delete in C++ when a program is exiting anyway?
(8 answers)
Closed 8 years ago.
I have the following code:
class A {
public:
    virtual void f() {
        cout << "1" << endl;
    }
};

class B : public A {
public:
    void f() {
        cout << "2" << endl;
    }
};

int main() {
    A* a = new B();
    a->f();
    return 0;
}
And my question is: why is there no need to delete a before the return at the end of main?
According to my understanding this code will result in a memory leak, am I wrong?
[UPDATE]
I checked the code above using valgrind and it confused me even more: it says there is a memory leak.
There is indeed a memory leak. It lasts from the return of main to the exit of the program, which in this case is very, very short.
"According to my understanding this code will result in a memory leak, am I wrong?"
No, you're right, there should be a delete. Though the memory leak usually doesn't matter here, since the OS will reclaim all memory allocated by the process after return 0;.
Consider the following C++ code:
class test
{
public:
    int val;
    test() : val(0) {}
    ~test()
    {
        cout << "Destructor called\n";
    }
};

int main()
{
    test obj;
    test *ptr = &obj;
    delete ptr;
    cout << obj.val << endl;
    return 0;
}
I know delete should be called only on dynamically allocated objects, but what would happen to obj now?
OK, I get that we are not supposed to do such a thing. Now, if I am writing the following implementation of a smart pointer, how can I make sure that such a thing doesn't happen?
class smart_ptr
{
public:
    int *ref;
    int *cnt;

    smart_ptr(int *ptr)
    {
        ref = ptr;
        cnt = new int(1);
    }

    smart_ptr& operator=(smart_ptr &smptr)
    {
        if(this != &smptr)
        {
            // House keeping
            (*cnt)--;
            if(*cnt == 0)
            {
                delete ref;
                delete cnt;
                ref = 0;
                cnt = 0;
            }
            // Now update
            ref = smptr.ref;
            cnt = smptr.cnt;
            (*cnt)++;
        }
        return *this;
    }

    ~smart_ptr()
    {
        (*cnt)--;
        if(*cnt == 0)
        {
            delete ref;
            delete cnt;
            ref = 0;
            cnt = 0;
        }
    }
};
You've asked two distinct questions in your post. I'll answer them separately.
but what would happen to obj now ?
The behavior of your program is undefined. The C++ standard makes no comment on what happens to obj now. In fact, the standard makes no comment what your program does before the error, either. It simply is not defined.
Perhaps your compiler vendor makes a commitment to what happens, perhaps you can examine the assembly and predict what will happen, but C++, per se, does not define what happens.
Practically speaking1, you will likely get a warning message from your standard library, or you will get a segfault, or both.
1: Assuming that you are running in either Windows or a UNIX-like system with an MMU. Other rules apply to other compilers and OSes.
how can i make sure that [deleteing a stack variable] doesn't happen.
Never initialize smart_ptr with the address of a stack variable. One way to do that is to document the interface of smart_ptr. Another way is to redefine the interface so that the user never passes a pointer to smart_ptr: make smart_ptr responsible for invoking new itself.
Your code has undefined behaviour because you used delete on a pointer that was not allocated with new. This means anything could happen and it's impossible to say what would happen to obj.
I would guess that on most platforms your code would crash.
delete tries to release the memory where obj lives, but that memory was never obtained from the heap allocator, so the runtime detects the invalid free and the process is aborted (core dumped).
It's undefined what will happen so you can't say much. The best you can do is speculate for particular implementations/compilers.
It's not just undefined behavior, as stated in other answers; this will almost certainly crash.
The first issue is attempting to free a stack variable.
The second issue occurs at program termination, when the test destructor is called again for obj.