I don't really understand why those pointers are still accessible ... any help appreciated
#include <iostream>

class Wicked
{
public:
    Wicked() {};
    virtual ~Wicked() {};

    int a;
    int b;
};

class Test
{
public:
    Test() {};
    virtual ~Test() {};

    int c;

    Wicked * TestFunc()
    {
        Wicked * z;

        c = 9;
        z = new Wicked;
        z->a = 1;
        z->b = 5;
        return z;
    };
};

int main()
{
    Wicked *z;
    Test *t = new Test();
    z = t->TestFunc();
    delete z;
    delete t;

    // why can I set 'z' when pointer is already destroyed?
    z->a = 10;

    // why does z->a print 10?
    std::cout << z->a << std::endl;

    // why does t->c exist and print correct value?
    std::cout << t->c << std::endl;

    //------------------------------------------------------

    int * p = new int;
    *p = 4;

    // this prints '4' as expected
    std::cout << *p << std::endl;

    delete p;

    // this prints memory address as expected
    std::cout << *p << std::endl;

    return 0;
}
delete doesn't zero out the memory it frees, because doing so would take CPU cycles, and that's not what C++ is about. What you have there is a dangling pointer, and potentially a subtle error. Code like this can sometimes work for years only to crash at some point in the future when some minor change is made elsewhere in the program.
This is a good reason why you should null out pointers when you've deleted the memory they point to; that way you'll get an immediate error if you try to dereference the pointer. It's also sometimes a good idea to clear the memory pointed to using a function like memset(). This is particularly true if the memory contains something confidential (e.g. a plaintext password) which you don't want other, possibly user-facing, parts of your program to have access to.
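A minimal sketch of both suggestions (the buffer and its contents are purely illustrative): scrub the confidential data before freeing it, then null the pointer so a later dereference fails fast instead of silently reading stale data.

#include <cstring>

int main()
{
    char* password = new char[64];     // hypothetical secret buffer
    std::strcpy(password, "hunter2");

    // ... use the password ...

    std::memset(password, 0, 64);      // clear the confidential contents
    delete[] password;
    password = nullptr;                // a later *password now traps on typical
                                       // platforms instead of reading stale data
    return 0;
}

One caveat worth knowing: an optimizer may remove a memset whose result is never read, which is why platform functions like SecureZeroMemory exist on Windows.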
That's undefined behaviour. Anything can happen. You were lucky this time; or perhaps unlucky, since it would be preferable to get an immediate runtime error. Next time round maybe you will.
It's not really very useful to reason about why you see a particular manifestation of undefined behaviour. It's best to stick to the well-defined behaviour about which you can reason.
C++ won't stop you from writing to an arbitrary location in memory. When you allocate memory with new or malloc, C++ finds some unused space in memory, marks it as allocated (so that it doesn't accidentally get handed out again), and gives you its address.
Once you delete that memory however, C++ marks it as free and may hand it out to anyone that asks for it. You can still write to it and read from it, but at this point, someone else might be using it. When you write to that place in memory, you may be overwriting some value you have allocated elsewhere.
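You can watch this in action with a well-defined experiment: the sketch below prints the address of an allocation, frees it, and allocates again. On many implementations both lines show the same address, but nothing in the language promises that.

#include <iostream>

int main()
{
    int* p = new int(1);
    std::cout << "first allocation:  " << static_cast<void*>(p) << '\n';
    delete p;                    // the block goes back to the allocator

    int* q = new int(2);         // often handed the very same block
    std::cout << "second allocation: " << static_cast<void*>(q) << '\n';
    delete q;
    return 0;
}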
Here
// why can I set 'z' when pointer is already destroyed?
z->a = 10;
z still points at a memory location.
But it no longer belongs to you. You have passed it to delete and said "take care of this pointer"; what happens to it afterwards is no longer your concern. It's like selling your car: it still exists, but it's not yours, so opening the door and looking inside may be possible, but it may also get you arrested.
Same with deleted pointers: the memory exists but does not belong to you.
If you look inside, it may work, but it may also cause a segmentation fault because the runtime has unmapped the page (you never know).
delete z; just deallocates the memory z was pointing to; it does not destroy the pointer itself.
So z becomes a dangling pointer.
Because deleting a block of memory does not zero the value of all pointers that point to it. Deleting memory merely makes a note that the memory is available to be allocated for some other purpose. Until that happens, the memory may appear to be intact -- but you can't count on it, and on some compiler/runtime/architecture combinations, your program will behave differently -- it may even crash.
In the code below, memory is allocated for an integer, a shallow copy of the pointer is made, and finally delete is called on both. How does it still print 23 as the output, and why doesn't the delete call on q cause a runtime error?
#include <iostream>
using namespace std;

int main() {
    int* p = new int(23);
    int* q = p;
    delete p;
    cout << *p << endl;
    delete q;
    return 0;
}
Undefined behaviour means anything can happen.
It might crash.
It might crash your car.
It might crash your brain.
It might crash Sagittarius A* into your brain.
It might crash your brain into your car, then crash them both into Sagittarius A*.
It might appear to work.
But it's still undefined.
Don't expect results.
I was recently reading about stack & heap corruption in C & C++. The author of the website demonstrates stack corruption using the example below.
#include <stdio.h>

int main(void)
{
    int b = 10;
    int a[3];
    a[0] = 1;
    a[1] = 2;
    a[2] = 3;
    printf(" b = %d \n", b);
    a[3] = 12; // oops, it is invalid: behaviour is undefined
    printf(" b = %d \n", b);
    printf("address of b   = %p\n", (void*)&b);
    printf("address of a[3]= %p\n", (void*)&a[3]);
    return 0;
}
I tested the above program with the Visual Studio 2010 compiler (VC++) and it gives me a runtime error that says:
stack around variable a gets corrupted
Now my question: is the stack corrupted for good, or only while the erroneous program is running?
In the same way, I know that deleting the same pointer twice can do really bad things like heap corruption.
The following code:
int* p = new int();
delete p;
delete p; // oops, disaster here: undefined behaviour
When the above fragment executes, VC++ reports a heap corruption error at runtime.
It is Undefined Behaviour. You cannot know what will happen if you do 'forbidden' things. You have no guarantee that your program will work well.
You have to be careful with terminology here. Will the stack be "corrupted" for the remainder of your program's life? It may be; it may not be. In this instance you've only corrupted data within the current stack frame, so once you're out of that function call, in practice your "corruption" will have gone.
But that's not quite the whole story. Since you've overwritten a variable with bytes that aren't supposed to be there, what knock-on effects might that have on your program? The consequences of this memory corruption could be passed on to other function scopes, or even to other computers if you're sending this data over a network connection and the data is no longer in the expected form. (Typically, your data protocol will have safety features built in to detect and discard unexpected forms of data, but that's up to you.)
The same is true of heap corruption. Any time you overwrite the bytes of something that is not supposed to be overwritten, and any time you do so with arbitrary or unknowable data, you run the risk of potentially catastrophic consequences that may logically last well beyond the lifetime of your program.
Within the scope of C++ as a language, this condition is summed up in a specific phrase: undefined behaviour. It states that you can't really rely on anything at all after you've corrupted your memory. Once you've invoked UB, all bets are off.
The one guarantee that you usually have in practice is that your OS will not allow you to directly overwrite any memory that does not belong to your program. That is, corrupting the memory of other processes or of the OS itself is very difficult. The memory model of modern OSs is deliberately designed that way in order to keep programs isolated and prevent this kind of damage from broken programs and/or viruses.
C++, like C, does not check for array overflow or underflow. However, you can abstract this away: you can define an array class with an overloaded index operator (operator[]) that checks the index against the bounds and acts accordingly (a sketch follows the code below). When you delete a pointer using delete ptr (where ptr was allocated through new), the space is returned to the heap, but the pointer itself still holds its old value. So it's good programming practice to set the pointer to null after delete, e.g.
int* p = new int();
...
if (p) {
    delete p;
    p = nullptr;
}

// double deletion is prevented, and p is not dangling any more
if (p) {
    delete p;
    p = nullptr;
}
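And here is a minimal sketch of the bounds-checked array idea mentioned above. The class name is made up for illustration; in real code, std::array and std::vector already provide the same check via their at() member.

#include <cstddef>
#include <stdexcept>

template <typename T, std::size_t N>
class CheckedArray {
public:
    T& operator[](std::size_t i)
    {
        // Reject out-of-bounds access instead of corrupting memory.
        if (i >= N)
            throw std::out_of_range("CheckedArray: index out of bounds");
        return data_[i];
    }
private:
    T data_[N] {};
};

With this in place, the a[3] = 12; from the question would throw an exception rather than silently trampling the stack.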
However, stack or heap corruption, if it occurs at all, stays confined to the program's address space, and when the program terminates, normally or abnormally, all the memory it occupied is released back to the operating system.
If we comment out the emphasized line below we get 777 in the console.
Otherwise we get some garbage like (-534532345).
My environment is Microsoft Visual Studio 2012 Pro.
#include <iostream>
#include <tchar.h>
using namespace std;

class C
{
public:
    C() { x = 777; }
    void ou() { cout << x; }
protected:
    int x;
};

class A
{
public:
    A(C & rrc) : rc(rrc) {};
    void koo() { rc.ou(); }
protected:
    C & rc;
};

int _tmain(int argc, _TCHAR* argv[])
{
    C c;
    C * pc = new C;
    A a(*pc);
    delete pc; // <<<< this line
    a.koo();
    return 0;
}
Can anyone help me figure out why I am seeing this behavior?
At the point you call a.koo(), you've deleted the underlying object that its rc reference refers to. That's of course UB. What happens next is likely to be consistent on a given platform for a given compilation, and may even output 777 (actually, it will likely output 777, since the underlying object was only very recently deleted).

In your case, it seems that either the memory previously allocated to _tmain()'s pc object has been reallocated to something else that has overwritten it, or else you're using a debug build whose memory allocator explicitly overwrites deleted/freed memory with some fixed value, typically nonzero and not all ones, but something otherwise recognizable like 0xAAAAAAAA or 0xDEADDEAD. Since -534532345 is 0xE023AF07 (or 0xFFFFFFFFE023AF07), I'm guessing it's the former (the memory has been allocated to something else which has overwritten it). Since the call to a.koo() in your example immediately follows delete pc, I find it surprising that it's already been overwritten so soon, but technically anything's possible since it's UB.
The delete leaves you with a dangling reference, which you then follow, leading to undefined behaviour. It is not a hole in C++ or MS VS; the language allows many illegal actions to go unchecked and leaves it up to the programmer not to invoke UB.
The code has undefined behavior. You are holding a reference to an object which no longer exists, so the behavior of rc.ou() is undefined.
If you delete pc, then A::rc will refer to a garbage location and rc.ou(); will output garbage as well. This is expected behavior, and it will do this regardless of the compiler you are using.
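One way out, sketched below on the assumption that shared ownership is acceptable here: let A hold a std::shared_ptr<C> instead of a raw reference, so the object cannot be destroyed while A still needs it.

#include <iostream>
#include <memory>

class C
{
public:
    C() { x = 777; }
    void ou() { std::cout << x; }
protected:
    int x;
};

class A
{
public:
    explicit A(std::shared_ptr<C> pc) : pc_(std::move(pc)) {}
    void koo() { pc_->ou(); }
protected:
    std::shared_ptr<C> pc_;  // shares ownership, keeps the C alive
};

int main()
{
    auto pc = std::make_shared<C>();
    A a(pc);
    pc.reset();  // our handle is gone, but a's copy keeps the object alive
    a.koo();     // safely prints 777
    return 0;
}

A plain value member would do just as well if copying C is acceptable.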
By the way, 99[.99]% of the time you think you've found a bug in a compiler, it's really your bug.
I thought I was fairly good with C++; it turns out that I'm not. A previous question I asked: C++ const lvalue references had the following code in one of the answers:
#include <iostream>
using namespace std;

int& GenX(bool reset)
{
    static int* x = new int;
    *x = 100;
    if (reset)
    {
        delete x;
        x = new int;
        *x = 200;
    }
    return *x;
}

class YStore
{
public:
    YStore(int& x);
    int& getX() { return my_x; }
private:
    int& my_x;
};

YStore::YStore(int& x)
    : my_x(x)
{
}

int main()
{
    YStore Y(GenX(false));
    cout << "X: " << Y.getX() << endl;
    GenX(true); // side-effect in Y
    cout << "X: " << Y.getX() << endl;
    return 0;
}
The above code outputs X: 100, X: 200. I do not understand why.
I played with it a bit, and added some more output, namely, a cout before the delete x; and a cout after the new x; within the reset control block.
What I got was:
before delete: 0x92ee018
after new: 0x92ee018
So, I figured that static was silently failing the update to x, and the second getX was playing with uninitialized memory (after the delete). To test this, I added x = 0; after the delete, before the new, and another cout to ensure that x was indeed reset to 0. It was.
So, what is going on here? How come new returns the exact same block of memory that the previous delete supposedly freed? Is this just because that's what the OS's memory manager decided to do, or is there something special about static that I'm missing?
Thank you!
That's just what the memory manager decided to do. If you think about it, it makes a lot of sense: You just freed an int, then you ask for an int again... why shouldn't the memory manager give you back the int you just freed?
More technically, what probably happens when you delete is that the memory manager puts the block you freed at the head of its free list. When you then call new, the memory manager scans its free list and finds a suitably sized block in the very first entry.
For more information about dynamic memory allocation, see "Inside storage allocation".
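To make the free-list idea concrete, here is a toy fixed-size-block pool. It is entirely hypothetical (no real runtime is this simple), but it shows why you so often get back the address you just released: the freed block sits at the head of the list, and the next allocation is served from that same head.

#include <cstddef>
#include <iostream>
#include <vector>

class ToyPool {
public:
    ToyPool(std::size_t blockSize, std::size_t count)
        : storage_(blockSize * count)
    {
        // Initially, every block is on the free list.
        for (std::size_t i = 0; i < count; ++i)
            freeList_.push_back(&storage_[i * blockSize]);
    }

    void* allocate()
    {
        if (freeList_.empty()) return nullptr;  // pool exhausted
        void* p = freeList_.back();             // head of the free list
        freeList_.pop_back();
        return p;
    }

    void deallocate(void* p)
    {
        // The freed block becomes the new head, so it is the first
        // candidate the next allocate() will hand out.
        freeList_.push_back(static_cast<unsigned char*>(p));
    }

private:
    std::vector<unsigned char> storage_;
    std::vector<unsigned char*> freeList_;
};

int main()
{
    ToyPool pool(sizeof(int), 8);
    void* a = pool.allocate();
    pool.deallocate(a);
    void* b = pool.allocate();
    std::cout << (a == b) << '\n';  // prints 1: the same block came back
    return 0;
}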
To your first question:
X: 100, X:200. I do not understand why.
Since Y.my_x is just a reference to the static *x in GenX, this is exactly how it is supposed to be: both refer to the same address in memory, and when you change the content of *x, you see the side effect through Y.
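A simplified, delete-free variant (with hypothetical names) isolates the well-defined part of that aliasing:

#include <iostream>

// gen() hands out a reference to a single static int, much as GenX does,
// but with no delete/new, so everything here is well defined.
int& gen()
{
    static int x = 100;
    return x;
}

struct Holder {
    explicit Holder(int& r) : ref(r) {}
    int& ref;  // aliases the static x for the life of the program
};

int main()
{
    Holder h(gen());
    std::cout << h.ref << '\n';  // 100
    gen() = 200;                 // a write through the same reference...
    std::cout << h.ref << '\n';  // ...is visible through h: prints 200
    return 0;
}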
You are accessing a memory block that has been deallocated. By the C++ standard, that is undefined behaviour, so anything can happen.
EDIT
I guess I have to draw it out:
You allocate memory for an int on the heap and pass the object to the constructor of Y, which stores a reference to it.
Then you deallocate that memory, but your object Y still holds the reference to the deallocated object.
Then you access the Y object again; it holds an invalid reference to a deallocated object, and what you see is the result of undefined behaviour.
EDIT2
The answer to "why": that is implementation defined. The compiler and runtime can place a new object at any location they like.
I tested your code in VC2008 and the output is X: 100, X: -17221323. I think the reason is that the memory behind the static x was freed. I think my test is reasonable.
This makes perfect sense given the code.
Remember that it is your pointer that is static, so when you enter the function a second time you do not create a new pointer; a new int is allocated (and the old one freed) only when reset is true.
You are also probably in debug mode, where a bit more time is spent giving you nice addresses.
Exactly why the new int ends up in the same space is probably just down to pure luck, plus the fact that you did not allocate anything else in between, so the same spot in memory was still free.
I was always told that it's not safe to call delete on an array allocated with new[]. You should always pair new with delete and new[] with delete[].
So I was surprised to discover that the following code compiles and runs ok, in both Debug and Release mode under VS2008.
class CBlah
{
public:
    CBlah() : m_i(0) {}
private:
    int m_i;
};

int _tmain(int argc, _TCHAR* argv[])
{
    for (;;)
    {
        CBlah * p = new CBlah[1000]; // with []
        delete p;                    // no []
    }
    return 0;
}
It took me a while to figure out why this works at all, and I think it's just luck and some undefined behaviour.
BUT... it made me wonder... why doesn't Visual Studio pick this up, at least in the Debug memory manager? Is it because there's lots of code out there that makes this mistake and they don't want to break it, or do they feel it's not the job of the Debug memory manager to catch this kind of mistake?
Any thoughts? Is this kind of misuse common?
It will certainly compile OK, because the pointer carries no compile-time information about whether it points to a single object or to an array. For example:
#include <iostream>
using namespace std;

int main()
{
    int* p;
    int x;
    cin >> x;
    if (x == 0)
        p = new int;
    else
        p = new int[10];
    delete p; // correct or not? :)
}
Now, about running OK. This is called undefined behaviour in C++; there is no guarantee about what will happen. Everything may run OK, you may get a segfault, you may get just plain wrong behaviour, or your computer may decide to call 911. UB <=> no guarantee
It's undefined behavior and everything is fair in love, war and undefined behavior...:)
According to MSDN, it translates delete to delete[] when trying to delete an array (see there, for instance), though you should get a warning after compiling.
The reason the Debug memory manager does not pick up on this error is probably that it is not implemented at the level of new/delete, but at the level of the memory manager that gets invoked by new/delete to allocate the required memory.
At that point, the distinction between array new and scalar new is gone.
You can read these SO answers and links about delete and delete[]: About delete, operator delete, delete[], ...
I don't know what makes you think it "works ok". It compiles and completes without crashing; that does not necessarily mean there was no leak or heap corruption. And even if you got away with it this time, that doesn't make it a safe thing to do.
Sometimes even a buffer overwrite is something you will "get away with" because the bytes you wrote to were not used (maybe they were just padding for alignment). Still, you should not go around doing it.
Incidentally, new T[1] is a form of new[] and still requires a delete[], even though in this instance there is only one element.
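A quick sketch of that last point:

#include <string>

int main()
{
    std::string* s = new std::string[1]; // array new, even with one element
    delete[] s;                          // so array delete is still required
    return 0;
}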
Interesting point.
Once I did a code review and tried to convince the programmers to fix a new[]/delete mismatch.
I argued with "Item 5" from Effective C++ by Scott Meyers. However, they countered with "What do you want, it works well!" and proved that there was no memory leak.
However, it worked only with POD types. It looks like MS tries to fix up the mismatch, as pointed out by Raveline.
What would happen if you added a destructor?
#include <iostream>

class CBlah
{
    static int instCnt;
public:
    CBlah() : m_i(0) { ++instCnt; }
    ~CBlah()
    {
        std::cout << "d-tor on " << instCnt << " instance." << std::endl;
        --instCnt;
    }
private:
    int m_i;
};

int CBlah::instCnt = 0;

int main()
{
    //for(;;)
    {
        CBlah * p = new CBlah[10]; // with []
        delete p;                  // no []
    }
    return 0;
}
Whatever silly "intelligence" fix is added to VS, the code is not portable.
Remember that "works properly" is within the universe of "undefined behavior". It is quite possible for a particular version of a particular compiler to implement this in such a way that it works for all intents and purposes. The important thing to remember is that this is not guaranteed and you can't really ever be sure it's working 100%, and you can't know that it will work with the next version of the compiler. It's also not portable, since another compiler might work in a different fashion.
This works because the particular C++ runtime library it was linked with uses the same heap for both operator new and operator new[]. Many do, but some don't, which is why the practice is not recommended.
The other big difference is that if CBlah had a non-trivial destructor, the delete p; would only call it for the first object in the array, whereas delete[] p; is sure to call it for all the objects.
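As a closing sketch of the correct pairing, with a non-trivial destructor:

#include <iostream>

struct D {
    ~D() { std::cout << "~D\n"; }
};

int main()
{
    D* p = new D[3];
    delete[] p;  // prints "~D" three times: every element is destroyed
    return 0;
}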