Is it possible to zero out the memory of deleted objects in C++? I want to do this to reproduce a coredump in a unit test:
//Some member variable of object-b is passed-by-pointer to object-a
//When object-b is deleted, that member variable is also deleted
//In my unit test code, I want to reproduce this
//even if I explicitly call delete on object-b
//accessBMemberVariable should coredump, but it doesn't
//I'm assuming even though object-b is deleted, it's still intact in memory
A *a = new A();
{
B *b = new B(a);
delete b;
}
a->accessBMemberVariable();
You probably should override the delete operator.
Example for the given class B:
#include <cstring> // for memset

class B
{
public:
    // your code
    // ...

    // override operator delete: zero the object's storage before releasing it
    void operator delete(void * p, size_t s)
    {
        ::memset(p, 0, s);
        ::operator delete(p, s); // sized form needs C++14; use ::operator delete(p) on older compilers
    }
};
EDIT: Thanks litb for pointing this out.
accessBMemberVariable should coredump, but it doesn't
Nah, why should it? It's quite possible that the memory b used to occupy is still owned by the CRT heap inside your own application; the CRT may opt not to release memory back to the OS. Core dumps only happen when you access memory not owned by your application.
Zeroing out the memory occupied by b may not do you any good depending on the type of variable that A has the address of.
My advice would be to allocate B on the stack; that should bring out the fireworks... but then again, not quite in the way you'd expect...
So if you really want a core dump you should use the OS functions to allocate memory and free it:
// OS_Alloc / OS_Free stand in for raw OS allocation calls
// (e.g. VirtualAlloc/VirtualFree or mmap/munmap); placement new needs <new>.
char *buf = OS_Alloc(sizeof(B));
B *b = new(buf) B();           // construct B in the OS-provided buffer
a->someBMember = &b->myMember;
b->~B();                       // destroy B manually
OS_Free(buf);                  // return the memory to the OS
a->accessBMemberVariable();    // now touches unmapped memory
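For example, on a POSIX system you could use mmap/munmap as those OS allocation functions. A minimal sketch (B and myMember are stand-ins for the classes in the question, and error handling is omitted):

#include <sys/mman.h>
#include <new>

struct B { int myMember = 42; };   // stand-in for the real B

int main() {
    // Ask the OS for a page to hold B (error handling omitted).
    void *buf = ::mmap(nullptr, sizeof(B), PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    B *b = new (buf) B();          // construct B in the mapped page
    int *dangling = &b->myMember;  // what 'a' would have kept a pointer to
    b->~B();                       // destroy the object
    ::munmap(buf, sizeof(B));      // hand the page back to the OS
    return *dangling;              // access after unmap: faults (SIGSEGV / core dump)
}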
Another poster suggested:
delete b;
memset(b,0,sizeof(B));
Please don't do this!!! Writes to address space that is returned to the memory manager are UNDEFINED!!!!
Even if your compiler and library let you get away with it now, it is bad bad bad. A change in library or platform, or even an update in the compiler will bite you in the ass.
Think of a race condition where you delete b, then some other thread makes an allocation, the memory at b is given out, and then you call memset! Bang, you're dead.
If you must clear the memory (though who cares), zero it out before calling delete:
memset(b,0,sizeof(B));
delete b;
Use placement "new" if you can (http://www.parashift.com/c++-faq-lite/dtors.html#faq-11.10)
and zero out the chunk you gave after calling the object destructor manually.
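A minimal sketch of that approach (the buffer and struct are made up for illustration):

#include <cstring>
#include <new>

struct B { int myMember = 42; };

int main() {
    alignas(B) unsigned char buf[sizeof(B)];  // storage you control
    B *b = new (buf) B();                     // placement new: construct B in buf
    b->~B();                                  // call the destructor manually
    std::memset(buf, 0, sizeof(buf));         // then zero out the chunk you gave
    // no delete here: the storage never came from operator new
}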
Use the debugging malloc/new features in your environment.
On MSVC, link with the debug runtime libraries. On FreeBSD, set MALLOC_OPTIONS to have the 'Z' or 'J' flags, as appropriate. On other platforms, read the documentation or substitute in an appropriate allocator with debugging support.
Calling memset() after deletion is just bad on so many levels.
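For instance, this kind of use-after-free (a sketch mirroring the question) is exactly what those debug allocators catch: run it under the debug runtime or with junk-fill enabled and the stale read usually returns an obvious poison pattern or aborts, instead of quietly returning 42:

#include <iostream>

struct B { int member = 42; };

int main() {
    B *b = new B();
    int *p = &b->member;       // the pointer 'a' would have kept
    delete b;
    std::cout << *p << '\n';   // use after free: undefined behavior;
                               // a junk-filling debug allocator makes it visible
}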
In your example you write
'A->accessBMemberVariable' -- is this a typo? Shouldn't it be 'a->accessBMemberVariable'?
Assuming it is a typo (otherwise the whole design seems a bit weird):
If you want to verify that 'a' is deleted properly, it would probably be better to change the way you handle the allocation and use auto_ptrs instead. That way you will be sure things are deleted properly:
auto_ptr<A> a( new A );
{
    auto_ptr<B> b( new B(a) ); // B takes ownership of 'a', deleted at scope exit
}
a->accessBMemberVariable(); // shouldn't do too well.
and a constructor for B in the form of
B( auto_ptr<A>& a ) : m_a(a) {}
where B has a member
auto_ptr<A> m_a;
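Putting those fragments together, a minimal compilable sketch might look like this (note std::auto_ptr was deprecated in C++11 and removed in C++17, so this needs an older language mode; accessBMemberVariable is just a stub here):

#include <memory>
using std::auto_ptr;

class A {
public:
    void accessBMemberVariable() { /* would touch B's member */ }
};

class B {
public:
    B(auto_ptr<A> &a) : m_a(a) {}   // B takes ownership of 'a'
private:
    auto_ptr<A> m_a;                // the A is deleted when B is destroyed
};

int main() {
    auto_ptr<A> a(new A);
    {
        auto_ptr<B> b(new B(a));    // ownership of the A transfers into b
    }                               // b (and the A it owns) destroyed here
    a->accessBMemberVariable();     // 'a' is now empty: dereferencing it is UB
}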
Once you've deleted b you don't really have permission to write over where it was. But you usually can get away with doing just that; and you'll see code where programmers use memset for this.
The approved way to do this, though, would be to call the destructor directly, then write over the memory (with, say, memset) and then call delete on the object. This does require your destructor to be pretty smart because delete is going to call the destructor. So the destructor must realize that the whole object is nothing but 0's and not do anything:
b->~B();                  // run the destructor explicitly
memset(b, 0, sizeof(*b)); // note sizeof(*b), not sizeof(b): cover the whole object
delete b;                 // the destructor runs again here, so it must tolerate an all-zero object
Related
This is a question that's been nagging me for some time. I always thought that C++ should have been designed so that the delete operator (without brackets) works even with the new[] operator.
In my opinion, writing this:
int* p = new int;
should be equivalent to allocating an array of 1 element:
int* p = new int[1];
If this was true, the delete operator could always be deleting arrays, and we wouldn't need the delete[] operator.
Is there any reason why the delete[] operator was introduced in C++? The only reason I can think of is that allocating arrays has a small memory footprint (you have to store the array size somewhere), so that distinguishing delete vs delete[] was a small memory optimization.
It's so that the destructors of the individual elements will be called. Yes, for arrays of PODs, there isn't much of a difference, but in C++, you can have arrays of objects with non-trivial destructors.
Now, your question is, why not make new and delete behave like new[] and delete[] and get rid of new[] and delete[]? I would go back to Stroustrup's "Design and Evolution" book where he said that if you don't use C++ features, you shouldn't have to pay for them (at run time at least). The way it stands now, a new or delete will behave as efficiently as malloc and free. If delete had the delete[] meaning, there would be some extra overhead at run time (as James Curran pointed out).
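A small sketch of the point about element destructors: with delete[], every element's destructor runs, which scalar delete could not do without carrying the count around (Noisy is just an illustrative type):

#include <iostream>

struct Noisy {
    ~Noisy() { std::cout << "~Noisy\n"; }
};

int main() {
    Noisy *p = new Noisy[3];
    delete[] p;    // prints "~Noisy" three times
    // delete p;   // wrong form of delete for new[]: undefined behavior
}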
Damn, I missed the whole point of the question, but I will leave my original answer as a sidenote. The reason we have delete[] is that a long time ago we had delete[cnt]; even today, if you write delete[9] or delete[cnt], some compilers just ignore the thing between the [] and compile it OK. Back then, C++ was first processed by a front-end and then fed to an ordinary C compiler. They could not do the trick of storing the count somewhere behind the scenes; maybe they could not even think of it at the time. For backward compatibility, compilers most probably used the value given between the [] as the count of the array; if there was no such value, they got the count from the prefix, so it worked both ways. Later on, we typed nothing between the [] and everything worked. Today, I do not think delete[] is necessary, but the implementations demand it that way.
My original answer (that misses the point):
delete deletes a single object. delete[] deletes an object array. For delete[] to work, the implementation keeps the number of elements in the array. I just double-checked this by debugging ASM code. In the implementation (VS2005) I tested, the count was stored as a prefix to the object array.
If you use delete[] on a single object, the count variable is garbage, so the code crashes. If you use delete for an object array, because of some inconsistency, the code crashes. I tested these cases just now!
The statement "delete just deletes the memory allocated for the array." in another answer is not right. If the object is a class, delete will call the DTOR. Just place a breakpoint in the DTOR code and delete the object; the breakpoint will be hit.
What occurred to me is that, if the compiler & libraries assumed that all objects allocated by new are object arrays, it would be OK to call delete for single objects or object arrays. Single objects would just be the special case of an object array having a count of 1. Maybe there is something I am missing, anyway.
Since everyone else seems to have missed the point of your question, I'll just add that I had the same thought some years ago, and have never been able to get an answer.
The only thing I can think of is that there's a very tiny bit of extra overhead to treat a single object as an array (an unnecessary "for(int i=0; i<1; ++i)" )
Adding this since no other answer currently addresses it:
Array delete[] can never be used on a pointer to a base class -- while the compiler stores the count of objects when you invoke new[], it doesn't store the types or sizes of the objects (as David pointed out, in C++ you rarely pay for a feature you're not using). However, scalar delete can safely delete through a base class pointer, so it's used both for normal object cleanup and polymorphic cleanup:
struct Base { virtual ~Base(); };
struct Derived : Base { };
int main(){
Base* b = new Derived;
delete b; // this is good
Base* b = new Derived[2];
delete[] b; // bad! undefined behavior
}
However, in the opposite case -- non-virtual destructor -- scalar delete should be as cheap as possible -- it should not check for number of objects, nor for the type of object being deleted. This makes delete on a built-in type or plain-old-data type very cheap, as the compiler need only invoke ::operator delete and nothing else:
int main(){
int * p = new int;
delete p; // cheap operation, no dynamic dispatch, no conditional branching
}
While not an exhaustive treatment of memory allocation, I hope this helps clarify the breadth of memory management options available in C++.
Marshall Cline has some info on this topic.
delete [] ensures that the destructor of each element is called (if applicable to the type) while delete just deletes the memory allocated for the array.
Here's a good read: http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=287
And no, array sizes are not stored anywhere in C++. (Thanks everyone for pointing out that this statement is inaccurate.)
I'm a bit confused by Aaron's answer and frankly admit I don't completely understand why and where delete[] is needed.
I did some experiments with his sample code (after fixing a few typos). Here are my results.
Typos: ~Base needed a function body
Base *b was declared twice
struct Base { virtual ~Base(){ } };
struct Derived : Base { };
int main(){
Base* b = new Derived;
delete b; // this is good
b = new Derived[2];
delete[] b; // bad! undefined behavior
}
Compilation and execution
david#Godel:g++ -o atest atest.cpp
david#Godel: ./atest
david#Godel: # No error message
Modified program with delete[] removed
struct Base { virtual ~Base(){}; };
struct Derived : Base { };
int main(){
Base* b = new Derived;
delete b; // this is good
b = new Derived[2];
delete b; // bad! undefined behavior
}
Compilation and execution
david#Godel:g++ -o atest atest.cpp
david#Godel: ./atest
atest(30746) malloc: *** error for object 0x1099008c8: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
Of course, I don't know if delete[] b is actually working in the first example; I only know it does not give a compiler error message.
Why are the ctor and dtor not getting invoked even though the memory is allocated and freed? What is actually happening here? Please share your thoughts.
#include<iostream>
#include<stdlib.h>
using namespace std;
class a{
public:
int i;
a() {cout<<"\n a ctor \n";}
~a(){cout<<"\n a dtor \n";}
};
main() {
a *ap = NULL;
ap = (a*)malloc(sizeof(a));
ap->i = 11;
cout<<ap->i<<"\n";
cout<<ap<<"\n";
free(ap); //does this actually work? Does this free the memory?
cout<<ap<<"\n";
ap = NULL;
cout<<ap;
}
Does the above mean that the ctor and dtor are not useful, or are they simply not needed here?
Everything is OK here.
The constructor/destructor are just normal functions under the hood.
What malloc does: reserve xy bytes of memory.
What new does: call malloc (or something like that), then call the constructor.
malloc shouldn't call any constructor (and it can't, because it doesn't know which one; it only knows a byte count).
If you want to handle the memory manually and then just call the constructor yourself, see "placement new".
There's nothing in malloc to trigger calling your constructor; malloc is not for allocating objects, it's for allocating a buffer for general use (and not normally for C++ code). Since malloc is a C library function and knows nothing of C++, it would seem a bit off for it to call a C++ constructor -- especially considering constructors can have arguments and malloc has no way to receive those.
If you have a valid reason to use malloc to allocate what will become an object, you are still responsible for ensuring the constructor and destructor get called. You do the construction with a new call that is modified to be a "placement new". It's extremely rare that you will have a legitimate use for this in conjunction with malloc, but its use in your example would be:
void *ap_addr = malloc(sizeof(a));   // raw buffer only, no constructor runs
ap = new(ap_addr) a();               // placement new: construct 'a' in that buffer
ap->i = 11;
Note that you are now responsible for calling the destructor explicitly (ap->~a(); delete must not be used on memory that came from malloc) and then free() to release the buffer. Of course, releasing the buffer is optional if you're going to reuse it, for example.
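Putting that together with the question's class a, a sketch of the whole round trip might look like this:

#include <cstdlib>
#include <iostream>
#include <new>

class a {
public:
    int i;
    a()  { std::cout << "\n a ctor \n"; }
    ~a() { std::cout << "\n a dtor \n"; }
};

int main() {
    void *raw = std::malloc(sizeof(a));  // raw bytes only, no ctor
    a *ap = new (raw) a();               // placement new runs the ctor
    ap->i = 11;
    std::cout << ap->i << "\n";
    ap->~a();                            // run the dtor explicitly
    std::free(raw);                      // release the raw buffer
    return 0;
}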
It should be int main.
What is the problem with using new?
Do not mix new/malloc with delete/free.
Use nullptr, not NULL.
I'm trying to answer some past paper questions that I've been given for exam practice, but I'm not really sure about these two; any help would be greatly appreciated. (I typed the code up from an image; I think it's all right.)
Q1: Identify the memory leaks in the C++ code below and explain how to fix them. [9 marks]
#include <string>
class Logger {
public:
static Logger &get_instance () {
static Logger *instance = NULL;
if (!instance){
instance = new Logger();
}
return *instance;
}
void log (std::string const &str){
// ..log string
}
private:
Logger(){
}
Logger(Logger const&) {
}
Logger& operator= (Logger const &) {
}
~Logger() {
}
};
int main(int argcv, char *argv[]){
int *v1 = new int[10];
int *v2 = new int[20];
Logger::get_instance() . log ("Program Started");
// .. do something
delete v1;
delete v2;
return 0;
}
My answer is that if main never finishes executing, due to an early return or an exception being thrown, the deletes will never run, so the memory is never freed.
I've been doing some reading and I believe an auto_ptr would solve the problems. Would it be as simple as changing the lines to the following?
auto_ptr<int> v1 = new int[10];
auto_ptr<int> v2 = new int[20];
v1.release();
delete v1;
Q2: Why do virtual members require more memory than objects of a class without virtual members?
A: Because each virtual member requires a pointer to be stored in a vtable as well, requiring more space, although this equates to very little increase in space.
Q1: Note that v1 and v2 are int pointers that refer to arrays of 10 and 20 elements, respectively. The delete operator does not match -- i.e., since each is an array, it should be
delete[] v1;
delete[] v2;
so that the whole array is freed. Remember to always match new[] with delete[] and new with delete.
I believe you're already correct on Q2. The vtable and corresponding pointers that must be kept track of do increase the memory consumption.
Just to summarize:
the shown program has undefined behavior from using the incorrect form of delete, so talking about leaks for that execution is immaterial
if the previous was fixed, leaks would come from:
new Logger(); // always
the other two new uses, if a subsequent new throws, or the string ctor throws, or the ... part in log throws.
to fix v1 and v2, auto_ptr is no good as you allocated with new[]. You could use boost::scoped_array, or better make them array<int, 10> and array<int, 20>, or at least vector<int>. And you absolutely don't call release() and then do a manual delete; leave that to the smart pointer.
fixing instance is interesting. What is presented is called the 'leaky singleton', which is supposed to leak the instance but remain available after creation in case something wants to use it during program exit. If that was not intended, instance should not be created using new, but be an object directly, either a local static or a namespace-scope static (see the sketch after this list).
the question is badly phrased, comparing incompatible things. Assuming it is sanitized, the answer is that instances of a class with virtual members are (very likely) to carry an extra pointer to the VMT. Plus the VMT itself has one entry per virtual member, after some general overhead. The latter is indeed insignificant, but the former may be an issue, as a class with 1 byte of state may pick up an 8-byte pointer, and possibly another 7 bytes of padding.
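Putting those fixes together, a leak-free version might look like this (a sketch: a function-local static Logger instead of new, and std::vector in place of the raw new[] arrays):

#include <string>
#include <vector>

class Logger {
public:
    static Logger &get_instance() {
        static Logger instance;          // constructed on first use, destroyed at exit
        return instance;
    }
    void log(std::string const &str) {
        // ..log string
    }
private:
    Logger() {}
    Logger(Logger const &);
    Logger &operator=(Logger const &);
    ~Logger() {}
};

int main(int argc, char *argv[]) {
    std::vector<int> v1(10);             // freed automatically, no delete[] needed
    std::vector<int> v2(20);
    Logger::get_instance().log("Program Started");
    // .. do something
    return 0;
}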
Your first answer is correct enough to get credit, but what the examiner was probably looking for is the freeing of Logger *instance.
In the given code, memory for instance is allocated, but never deallocated.
The second answer looks good.
instance is never deleted and you need to use operator delete[] in main().
Q1:
A few gotchas:
The singleton pattern as written is dangerous: it is not thread safe, for example. Two threads could come in and each create an instance, causing a memory leak. You could surround the creation with EnterCriticalSection or some other thread-sync mechanism, but it is still easy to get wrong and not recommended.
The singleton class never releases its memory; a singleton should be reference counted (or otherwise cleaned up) to really behave properly.
You're using a static variable inside the function, which is even worse than using a static member of the class.
You allocate with new[] but delete without the delete[].
I suspect your question is after two things:
- free the singleton pointer
- use delete[]
In general, however, process cleanup at exit will reclaim the dangling allocations.
Q2:
your second answer is right, because virtual members require a vtable (and a vtable pointer in each object), which makes the class larger
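A quick way to see that size difference (a sketch; the exact numbers are implementation-specific, but on a typical 64-bit compiler Plain is 1 byte while WithVirtual is 16: an 8-byte vtable pointer plus the char plus padding):

#include <iostream>

struct Plain       { char c; };
struct WithVirtual { char c; virtual ~WithVirtual() {} };

int main() {
    std::cout << "sizeof(Plain)       = " << sizeof(Plain)       << '\n';
    std::cout << "sizeof(WithVirtual) = " << sizeof(WithVirtual) << '\n';
}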
I have a class MyClassA. In its constructor, I am passing a pointer to an instance of class B. I have some very basic questions related to this.
(1) First thing: is the following code correct? (the code that makes a shallow copy and the code in methodA())
MyClassA::MyClassA(B *b){
this.b = b;
}
void MyClassA::methodA(){
int i;
i = b.getFooValue();
// Should I rather be using the arrow operator here??
// i = b->getFooValue()
}
(2) I am guessing I don't need to worry about deleting memory for MyClassA.b in the destructor ~MyClassA() as it is not allocated. Am I right?
thanks
Update: Thank you all for your answers! MyClassA is only interested in accessing the methods of class B. It is not taking ownership of B.
You need the arrow operator since b is a pointer.
Yes, unless the user of MyClassA expects it to take ownership of b. (You can't even be sure b isn't a stack variable, in which case delete-ing it may crash the code.)
Why don't you use a smart pointer, or even simpler, a reference?
First thing, is the following code correct? (the code that makes a shallow copy and the code in methodA())
The answer depends upon who owns the responsibility for the B object's memory. If MyClassA is supposed to just store the pointer without taking on the responsibility to delete it, then it is fine. Otherwise, you need to do a deep copy.
I am guessing I don't need to worry about deleting memory for MyClassA.b in the destructor ~MyClassA() as it is not allocated. Am I right?
Again, it depends on how the memory for B is allocated. Is it allocated on the stack or the heap? If on the stack, then you need not explicitly free it in the destructor of MyClassA; otherwise you need to delete it.
1). It depends on the lifetime of the pointer to B.
Make sure that when you call b->getFooValue(), b is still a valid pointer.
I would suggest using an initialization list, and if you are only reading the value of the B object through the pointer, make it a pointer to constant data.
MyClassA::MyClassA(const B *bObj) : b(bObj)
{}
2). As long as B is on the stack there is no need to delete it; if it is allocated on the heap then it must be deleted by its owner, or else you will have a memory leak.
You can use a smart pointer to get rid of the problem.
MyClassA::MyClassA(B *b){
this.b = b;
}
should be:
MyClassA::MyClassA(B *b){
this->b = b;
}
because this is a pointer.
1)
this.b = b;
Here you pass a pointer to an instance of B. As Mac notes, this should be:
this->b = b;
b.getFooValue();
This should be b->getFooValue(), because MyClassA::b is a pointer to B.
2) This depends on how you define what MyClassA::b is. If you specify (in code comments) that MyClassA takes over ownership of the B instance passed to MyClassA's constructor, then you'll need to delete b in MyClassA's destructor. If you specify that it only keeps a reference to b, without taking ownership, then you don't have to.
PS. Regrettably, in your example there is no way to make ownership explicit other than in code documentation.
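For illustration, here is a minimal sketch of how a smart pointer (as suggested above) can make ownership explicit in the interface itself, assuming C++11's std::unique_ptr is available; the non-owning version just keeps the raw pointer and never deletes it:

#include <memory>

class B {
public:
    int getFooValue() const { return 42; }       // stand-in for the real method
};

// Non-owning: MyClassA only observes B; whoever created B deletes it.
class MyClassA {
public:
    explicit MyClassA(B *b) : b(b) {}
    int methodA() { return b->getFooValue(); }   // arrow operator: b is a pointer
private:
    B *b;                                        // not deleted in ~MyClassA
};

// Owning: the constructor signature itself says "I take ownership".
class MyClassAOwning {
public:
    explicit MyClassAOwning(std::unique_ptr<B> b) : b(std::move(b)) {}
    int methodA() { return b->getFooValue(); }
private:
    std::unique_ptr<B> b;                        // deleted automatically
};

int main() {
    B stackB;                                    // caller-owned B
    MyClassA observer(&stackB);                  // safe: no delete happens
    MyClassAOwning owner(std::unique_ptr<B>(new B)); // ownership transferred
    return observer.methodA() + owner.methodA();
}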