How does the delete in C++ know how many memory locations to delete
I know it's a rather simple question, but I am not sure about the difference (if any) between these lines:
double * a = new double[100];
delete[] a;
delete a;
free ((void*)a);
First off, would all of these calls (each used on its own, without the others) work the same way and free sizeof(double)*100 bytes?
Which leads me to the second question: how does the program keep track of the size of the allocated memory? For instance, if I pass my pointer a to a function and then delete[] it from within that function, would that free the same amount of memory?
Thanks
The difference, oversimplified, is this:
delete[] a;
Is correct. All others are incorrect, and will exhibit Undefined Behavior.
Now, in reality, on all the compilers I use daily, delete a; will do the right thing every time. But you should still not do it. Undefined Behavior is never correct.
The free call will also probably do the right thing in the real world, but only because the thing you're freeing doesn't have a non-default destructor. If you tried to free something that was a class with a destructor, for example, it definitely wouldn't work -- the destructor would never be called.
That's one of the big differences (not the only difference) between new/delete and malloc/free -- the former call the constructors and destructors, while the latter merely allocate and deallocate space.
Incorporating something @Rob said in his now-deleted post:
The simple rule is this: every new[] requires exactly one delete[].
Every new requires exactly one delete. malloc requires free. No
mix-and-match is allowed.
As to the question of how delete[] knows how many elements to delete, please see this response to a previous duplicate question.
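A minimal sketch of those pairings side by side (purely illustrative):
#include <cstdlib>

int main()
{
    double* a = new double[100];   // array new ...
    delete[] a;                    // ... must be matched with delete[]

    double* b = new double;        // scalar new ...
    delete b;                      // ... must be matched with delete

    double* c = (double*) std::malloc(100 * sizeof(double));   // malloc ...
    std::free(c);                                              // ... must be matched with free
    return 0;
}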
Someone correct me if I'm wrong, but as far as I understand, you use delete when you previously allocated memory with new, and delete[] after using new Type[]. And free is used when you have allocated memory using malloc.
See the C++ reference on delete and free.
Regarding the array of doubles, the result of all forms is the same -- all the allocated memory is returned to the system. The difference in calling free vs. delete vs. delete[] is:
free only releases the memory (the size of the memory allocated for a was stored by the memory manager when new was called)
delete calls the destructor of allocated object before releasing the memory
delete[] calls the destructor of each element in the array before releasing the memory
The difference is important if destructor of the allocated object contains cleanup code which releases other memory or system resource such as file or socket descriptor allocated during the lifetime of an object.
It is a good habit in C++ to always use delete on single instances and delete[] on arrays of objects/primitives, regardless of the content of the destructor.
I have the following question:
If I use malloc in a method, return the pointer to my main, and free the pointer in my main, have I successfully freed the memory or not? And is it bad programming style if I do so?
int* mallocTest(int size)
{
int * array = (int*) malloc(size);
return array;
}
int main ()
{
int* pArray = mallocTest(5);
free (pArray);
return 0;
}
EDIT: The main purpose of this question is that I want to know whether I freed the memory successfully (using the right "combination" of malloc/free or new[]/delete[]) when I split this into the method and the main function!
EDIT2: Changed code and topic, to lead to the intended point of the question
Mixing malloc with delete (i.e. freeing malloc'd memory with delete) is explained in other answers.
I think you want to know whether memory malloc'd in one method, returned as a pointer to main, and freed in main will work. Yes, it can be done, and free will release the memory allocated in the other method, provided you pass it the pointer to that allocation.
No. Use free to free memory allocated with malloc, delete for single objects allocated with new, and delete[] when using new on arrays.
Mixing and matching may appear to work (it's undefined behaviour, and "undefined" includes "works fine" and "sort of works fine most of the time, but crashes on Thursdays in months starting with M, on days divisible by 3 or 7, when the operator has a shirt with stripes") -- and it may indeed work on SOME types of systems but fail on others, depending on exactly how malloc and new and their respective free and delete functions are implemented.
It is fine to call a function that returns a pointer to some memory that is later freed with the appropriate call. It is "nicer" if you actually implement a pair of functions, where one allocates and the other destroys the data, as in the sketch below. This is particularly important if the allocated data structure isn't trivial (e.g. you have allocations inside the outer allocation).
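A minimal sketch of such a pair; create_int_array/destroy_int_array are made-up names, not anything from the question:
#include <cstdlib>

int* create_int_array(int count)
{
    return (int*) std::malloc(count * sizeof(int));   // the allocation lives behind one function
}

void destroy_int_array(int* array)
{
    std::free(array);                                 // the matching release lives behind its counterpart
}

int main()
{
    int* p = create_int_array(5);
    destroy_int_array(p);   // callers never need to know how the memory was obtained
    return 0;
}
If create_int_array later switches to new int[count], only destroy_int_array has to change to delete[], not every caller.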
Also consider what happens if you decide, "Oh, I'd like to use new int[size]; instead of malloc(size * sizeof(int)); in mallocTest()". Now every place that calls mallocTest() has to change so that it calls delete[] instead of the free that you corrected it to after reading this answer.
[Just spotted that your code is broken and probably won't compile, and certainly won't allocate space: (int *)malloc[size]; doesn't do what you want it to do, and I'm pretty sure it is illegal, as indexing a function is invalid]
And finally, the "best" solution is to wrap all allocations in an object, such that the destructor of that object destroys the data allocated within it. So, for example, use std::vector<int> instead of allocating with malloc.
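A minimal sketch of that approach, reworking mallocTest() with a made-up name (makeBuffer is illustrative, not from the original code):
#include <vector>

std::vector<int> makeBuffer(int count)
{
    return std::vector<int>(count);   // the vector owns the memory
}

int main()
{
    std::vector<int> buffer = makeBuffer(5);
    // no free/delete needed: buffer's destructor releases the memory
    return 0;
}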
No - that's undefined behaviour, which means it might look like it works but actually it does not. For malloc() you should always use free(). Use delete[] only for memory allocated with new[].
You can actually check it yourself: new[] calls the void* operator new[](size_t) function, which is declared somewhere in your platform headers. The easiest way is to spy on what it does with a debugger; under VS2005 it ends up calling the HeapAlloc function.
For deallocation you have void operator delete[](void*), which also must be defined somewhere. On VS2005 it calls HeapFree.
I checked what malloc/free call, and those are also HeapAlloc and HeapFree.
So in my case it looks like it would work, because malloc appears to be implemented the same way as new[]. But the point is that there is no magic here: new[] should be paired with delete[], and malloc() with free(), because you never know how those are implemented on a given platform.
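A minimal sketch of doing that check in code instead of the debugger, by replacing the global array forms with logging versions (portable C++; the HeapAlloc/HeapFree detail above is Windows-specific):
#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new[](std::size_t size)
{
    std::printf("operator new[](%zu)\n", size);   // log every array allocation
    if (void* p = std::malloc(size)) return p;
    throw std::bad_alloc();
}

void operator delete[](void* p) noexcept
{
    std::printf("operator delete[](%p)\n", p);    // log every array deallocation
    std::free(p);
}

int main()
{
    int* a = new int[10];   // prints operator new[](40) where int is 4 bytes
    delete[] a;             // prints operator delete[](...)
    return 0;
}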
When you dynamically allocate memory either using malloc or new, you
are "reserving" a part of the heap memory for a particular purpose.
The memory will remain "reserved" until you return it to the heap
using free or delete (depending on what you used for allocation).
That being said, you can allocate memory from anywhere in the program
and free it from anywhere. It's important, however, to be sure to do
both: if you forget to free the allocated memory, you get memory leaks.
Actually, you should use free with malloc and delete with new, but to me it is not because of the undefinedness, that it may blow up a nuclear bomb, invoke nasal demons or whatever (or, more simply, cause maintenance nightmares). malloc and new don't do the same thing at all. To simplify what is actually a bit more complicated:
malloc, inherited from C, allocates a chunk of memory. Period.
new T allocates a correctly-sized chunk of memory intended to store an object of type T (possibly through malloc), and executes the object's constructor.
Conversely:
delete ptr executes the destructor of the object pointed-to by ptr and releases the related chunk of memory.
free(ptr) releases the chunk of memory. Period.
For the universe not to fall apart, every call to a constructor must match a call to the destructor. That's a guarantee of the language. (and one of the greatest strengths of C++)
That's why every call to malloc must match a call to free, because free was made to undo what malloc did. And every call to new must match a call to delete, because delete was made to undo what new did.
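A minimal sketch of that correspondence, using a made-up Widget type; the two-step forms below are roughly what new and delete do for a single object (simplified, ignoring exception safety):
#include <cstdio>
#include <new>      // placement new, ::operator new / ::operator delete

struct Widget {
    Widget()  { std::puts("constructed"); }
    ~Widget() { std::puts("destructed"); }
};

int main()
{
    // Roughly what `Widget* p = new Widget;` does, split into its two steps:
    void* mem = ::operator new(sizeof(Widget));   // 1. allocate raw memory
    Widget* p = new (mem) Widget;                 // 2. run the constructor

    // Roughly what `delete p;` does, split into its two steps:
    p->~Widget();                                 // 1. run the destructor
    ::operator delete(mem);                       // 2. release the raw memory
    return 0;
}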
In C, free() is used to release the memory, say free(ptr). As I understand, extra memory is allocated before ptr in the library code to store the block size information. After free() is called, the whole block is tracked and then released.
In C++, there are two forms of new and delete. One is for arrays: if new[] is used, delete[] should be used. For example,
int* ptr = new int[10];
delete [] ptr;
Question 1: can I use delete ptr here? If that is OK, what if delete ptr + 2?
Question 2: If delete[] has to be used to match new[], why do we need two forms of delete? Just one form, say delete, would be enough.
Thanks for all the suggestions!
Thanks Mgetz. Question 2 should be: why does the C++ standard provide both delete[] and delete if only one form is correct in any given situation?
Q1: You can use delete, but it is wrong.
This will usually "work" insofar as it will correctly free the allocated memory, but it will not call destructors properly. For trivial types, you will often not see any difference, but that doesn't mean it isn't wrong anyway. In any case it is undefined behavior, which you should avoid if you can (invoking UB forfeits any guarantees that your code will work; it might of course still work, but you can never be 100% sure).
Deleting ptr+2 is also undefined behavior and will almost certainly not "work", not even a little. Usually, this will simply result in a program crash.
Q2: You need the two because they mean different things. One means "delete this pointer-to-single-object" whereas the other means "delete this pointer-to-array-of-objects".
Obviously, the compiler needs to generate different code for those different things.
You need the two forms because, unlike malloc and free, new and delete do more than just allocate and deallocate memory; they also construct and destruct the object(s) respectively.
new and delete deal with scalar objects, while new[] and delete[] deal with arrays of objects.
When you call new T[n], it'll allocate enough memory for n copies of T, and then construct n instances within the allocated memory. Similarly, calling delete[] will cause destruction of those n instances followed by deallocation.
Obviously, since you do not pass n to delete[], that information is being stashed away somewhere by the implementation, but the standard doesn't require an implementation to destroy all n objects if you call delete instead. The implementation could just destroy the first object, it could behave correctly and destroy all n objects, or it could cause demons to fly out of your nose.
Simply put, it's undefined behavior: there's no telling what'll happen, and it's imperative you avoid it.
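A minimal sketch that makes the construct/destroy counting visible, using a made-up Counter type:
#include <cstdio>

struct Counter {
    static int live;            // number of currently constructed Counters
    Counter()  { ++live; }
    ~Counter() { --live; }
};
int Counter::live = 0;

int main()
{
    Counter* a = new Counter[5];               // constructs 5 objects
    std::printf("live = %d\n", Counter::live); // prints: live = 5
    delete[] a;                                // destroys all 5, then frees the block
    std::printf("live = %d\n", Counter::live); // prints: live = 0
    return 0;
}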
Why does C++ still have a delete[] AND a delete operator?
I'm wondering what their difference is. I know the obvious answer some might give, that one is to delete an array and the other is to delete a single object, but I'm wondering why there should be two different deletion methods for these two operations. I mean, delete is basically implemented using C's free, which doesn't care whether the pointer actually points to an array or to a single object. The only reason I can think of is to be able to know it is an array and call the destructor for each cell instead of only the first object, but that wouldn't be possible either, since the compiler cannot guess the length of the array just by looking at its pointer. By the way, though calling delete on memory allocated with new[] is said to invoke undefined behavior, I can't imagine anything that could possibly go wrong.
As you have discovered the compiler needs to know the length of an array (at least for non-trivial types) to be able to call destructors for each element. For this new[] typically allocates some extra bytes to record the element count and returns a pointer to the end of this bookkeeping area.
When you use delete[] the compiler will look at the memory before the array to find the count and adjust the pointer, so that the originally allocated block is freed.
If you use delete to destroy a dynamically allocated array, destructors for elements (except the first) won't be called and typically this will end up attempting to free a pointer that doesn't point to the beginning of an allocated block, which may corrupt the heap.
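A conceptual sketch of that bookkeeping (a simplification: real implementations also have to handle alignment, and typically only store a count for types with non-trivial destructors):
#include <cstdlib>
#include <new>

// Allocate space for a count plus n elements, store the count in front,
// and hand the caller a pointer to the first element.
template <typename T>
T* sketch_new_array(std::size_t n)
{
    void* block = std::malloc(sizeof(std::size_t) + n * sizeof(T));
    if (!block) throw std::bad_alloc();
    *static_cast<std::size_t*>(block) = n;                        // the "cookie"
    T* first = reinterpret_cast<T*>(static_cast<std::size_t*>(block) + 1);
    for (std::size_t i = 0; i < n; ++i)
        new (first + i) T();                                      // construct each element
    return first;
}

// Step back to find the cookie, destroy every element, free the whole block.
template <typename T>
void sketch_delete_array(T* first)
{
    std::size_t* block = reinterpret_cast<std::size_t*>(first) - 1;
    std::size_t n = *block;
    for (std::size_t i = n; i > 0; --i)
        first[i - 1].~T();                                        // destroy in reverse order
    std::free(block);                                             // free the original block
}

int main()
{
    int* a = sketch_new_array<int>(10);
    sketch_delete_array(a);   // must be given exactly the pointer returned above
    return 0;
}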
but that wouldn't be possible either, since the compiler cannot guess
the length of the array just by looking at its pointer
That's not really true. The compiler itself doesn't need to guess anything, but it does decide which function to call to free the memory based on the operator it sees. There is a separate function dedicated to releasing arrays, and this function does indeed know the length of the array to be freed so it can appropriately call destructors.
It knows the length of the array because typically new[] allocates memory that includes the array length (since this is known on allocation) and returns a pointer to just the "usable" memory allocated. When delete[] is called it knows how to access this memory based on the pointer to the usable part of the array that was given.
When you allocate memory using new[], the compiler not only needs to construct each element, it also needs to keep track of how many elements have been allocated. This is needed for delete[] to work correctly.
Since new and delete operate on scalars, they don't need to do that, and could save on a little bit of overhead.
There is absolutely no requirement for new to be compatible with delete[] and vice versa. Mixing the two is undefined behaviour.
Ignoring programming style and design, is it "safe" to call delete on a variable allocated on the stack?
For example:
int nAmount;
delete &nAmount;
or
class sample
{
public:
sample();
~sample() { delete &nAmount;}
int nAmount;
};
No, it is not safe to call delete on a stack-allocated variable. You should only call delete on things created by new.
For each malloc or calloc, there should be exactly one free.
For each new there should be exactly one delete.
For each new[] there should be exactly one delete[].
For each stack allocation, there should be no explicit freeing or deletion. The destructor is called automatically, where applicable.
In general, you cannot mix and match any of these, e.g. no free-ing or delete[]-ing a new object. Doing so results in undefined behavior.
Well, let's try it:
jeremy#jeremy-desktop:~$ echo 'main() { int a; delete &a; }' > test.cpp
jeremy#jeremy-desktop:~$ g++ -o test test.cpp
jeremy#jeremy-desktop:~$ ./test
Segmentation fault
So apparently it is not safe at all.
Keep in mind that when you allocate a block of memory using new (or malloc for that matter), the actual block of memory allocated will be larger than what you asked for.
The memory block will also contain some bookkeeping information so that when you free the block, it can easily be put back into the free pool and possibly be coalesced with adjacent free blocks.
When you try to free any memory that you didn't receive from new, that bookkeeping information won't be there, but the system will act like it is, and the results are going to be unpredictable (usually bad).
Yes, it is undefined behavior: passing to delete anything that did not come from new is UB:
C++ standard, section 3.7.3.2.3:
The value of the first argument supplied to one of the deallocation functions provided in the standard library may be a null pointer value; if so, and if the deallocation function is one supplied in the standard library, the call to the deallocation function has no effect. Otherwise, the value supplied to operator delete(void*) in the standard library shall be one of the values returned by a previous invocation of either operator new(std::size_t) or operator new(std::size_t, const std::nothrow_t&) in the standard library.
The consequences of undefined behavior are, well, undefined. "Nothing happens" is as valid a consequence as anything else. However, it's usually "nothing happens right away": deallocating an invalid memory block may have severe consequences in subsequent calls to the allocator.
After playing a bit with g++ 4.4 on Windows, I got very interesting results:
calling delete on a stack variable doesn't seem to do anything. No errors are thrown, but I can access the variable without problems after deletion.
Having a class with a method with delete this successfully deletes the object if it is allocated in the heap, but not if it is allocated in the stack (if it is in the stack, nothing happens).
Nobody can know what happens. This invokes undefined behavior, so literally anything can happen. Don't do this.
No,
Memory allocated using new should be deleted using the delete operator,
and memory allocated using malloc should be freed using free.
And there is no need to deallocate variables which are allocated on the stack.
An angel loses its wings... You can only call delete on a pointer allocated with new, otherwise you get undefined behavior.
Here the memory is allocated on the stack, so there is no need to delete it externally. But if you have allocated it dynamically,
like
int *a = new int();
then you have to do delete a and not delete &a (a itself is a pointer), because the memory is allocated from the free store.
You already answered the question yourself. delete must only be used for pointers obtained through new. Doing anything else is plain and simple undefined behaviour.
Therefore there is really no saying what happens; anything from the code working fine, through crashing, to erasing your hard drive is a valid outcome of doing this. So please never do this.
It's UB because you must not call delete on an item that has not been dynamically allocated with new. It's that simple.
Motivation: I have two objects, A and B. I know that A has to be instantiated before B, maybe because B needs information calculated by A. Yet, I want to destruct A before B. Maybe I am writing an integration test, and I want server A to shut-down first. How do I accomplish that?
A a{};
B b{a.port()};
// delete A, how?
Solution: Don't allocate A on the stack. Instead, use std::make_unique and keep a stack-allocated smart pointer to a heap-allocated instance of A. That way is the least messy option, IMO.
auto a = std::make_unique<A>();
B b{a->port()};
// ...
a.reset();
Alternatively, I considered moving the destruction logic out of A's destructor and calling that method explicitly myself. The destructor would then call it only if it has not been called previously.
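A minimal sketch of that alternative; shutdown() and the shut_down_ flag are made-up names for illustration, not anything from the original code:
class A {
public:
    int port() const { return 0; }   // placeholder
    void shutdown()
    {
        if (shut_down_) return;      // safe to call more than once
        // ... close sockets, join threads, flush buffers, etc. ...
        shut_down_ = true;
    }
    ~A() { shutdown(); }             // destructor cleans up only if shutdown() wasn't called
private:
    bool shut_down_ = false;
};
Mirroring the snippet above, usage then becomes:
A a{};
B b{a.port()};
// ...
a.shutdown();   // A's resources are released here, while b is still alive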
First of all, using delete for anything allocated with new[] is undefined behaviour according to C++ standard.
In Visual C++ 7 such pairing can lead to one of the two consequences.
If the type new[]'ed has a trivial constructor and destructor, VC++ simply uses new instead of new[], and using delete for that block works fine - new just calls "allocate memory", delete just calls "free memory".
If the type new[]'ed has a non-trivial constructor or destructor, the above trick can't be done - VC++7 has to invoke exactly the right number of destructors. So it prepends the array with a size_t storing the number of elements. Now the address returned by new[] points to the first element, not to the beginning of the block. So if delete is used, it only calls the destructor for the first element and then calls "free memory" with an address different from the one returned by "allocate memory", and this leads to some error indication inside HeapFree() which I suspect refers to heap corruption.
Yet here and there one can read false statements that using delete after new[] leads to a memory leak. I suspect that anything as serious as heap corruption is far more important than the fact that the destructor is called only for the first element, and that the destructors that weren't called possibly didn't free heap-allocated sub-objects.
How could using delete after new[] possibly lead only to a memory leak on some C++ implementation?
Suppose I'm a C++ compiler, and I implement my memory management like this: I prepend every block of reserved memory with the size of the memory, in bytes. Something like this;
| size | data ... |
       ^
       pointer returned by new and new[]
Note that, in terms of memory allocation, there is no difference between new and new[]: both just allocate a block of memory of a certain size.
Now how will delete[] know the size of the array, in order to call the right number of destructors? Simply divide the size of the memory block by sizeof(T), where T is the type of elements of the array.
Now suppose I implement delete as simply one call to the destructor, followed by the freeing of the size bytes, then the destructors of the subsequent elements will never be called. This results in leaking resources allocated by the subsequent elements. Yet, because I do free size bytes (not sizeof(T) bytes), no heap corruption occurs.
The fairy tale about mixing new[] and delete allegedly causing a memory leak is just that: a fairy tale. It has absolutely no footing in reality. I don't know where it came from, but by now it acquired a life of its own and survives like a virus, propagating by the word of mouth from one beginner to another.
The most likely rationale behind this "memory leak" nonsense is that from the innocently naive point of view the difference between delete and delete[] is that delete is used to destroy just one object, while delete[] destroys an array of objects ("many" objects). A naive conclusion that is usually derived from this is that the first element of the array will be destroyed by delete, while the rest will persist, thus creating the alleged "memory leak". Of course, any programmer with at least basic understanding of typical heap implementations would immediately understand that the most likely consequence of that is heap corruption, not a "memory leak".
Another popular explanation for the naive "memory leak" theory is that since the wrong number of destructors gets called, the secondary memory owned by the objects in the array does not get deallocated. This might be true, but it is obviously a very forced explanation, which bears little relevance in the face of much more serious problem with heap corruption.
In short, mixing different allocation functions is one of those errors that lead to solid, unpredictable and very practical undefined behavior. Any attempts to impose some concrete limits on the manifestations of this undefined behavior are just a waste of time and a sure sign of a lack of basic understanding.
Needless to add, new/delete and new[]/delete[] are in fact two independent memory management mechanisms, which are independently customizable. Once they get customized (by replacing raw memory management functions) there's absolutely no way to even begin to predict what might happen if they get mixed.
It seems that your question is really "why heap corruption doesn't happen?". The answer to that one is "because the heap manager keeps track of allocated block sizes". Let's go back to C for a minute: if you want to allocate a single int in C you would do int* p = malloc(sizeof(int)), if you want to allocate array of size n you can either write int* p = malloc(n*sizeof(int)) or int* p = calloc(n, sizeof(int)). But in any case you'll free it by free(p), no matter how you allocated it. You never pass size to free(), free() just "knows" how much to free, because the size of a malloc()-ed block is saved somewhere "in front" of the block. Back to C++, new/delete and new[]/delete[] are usually implemented in terms of malloc (although they don't have to be, you shouldn't rely on that). This is why new[]/delete combination doesn't corrupt the heap - delete will free the right amount of memory, but, as explained by everyone before me, you can get leaks by not calling the right number of destructors.
That said, reasoning about undefined behavior in C++ is always a pointless exercise. Why does it matter whether the new[]/delete combination happens to work, "only" leaks, or causes heap corruption? You shouldn't code like that, period! And, in practice, I would avoid manual memory management whenever possible - STL & boost are there for a reason.
If the non-trivial destructors, which are not called for any but the first element of the array, were supposed to free some memory, you get a memory leak, as those objects are not cleaned up properly.
It will lead to a leak in ALL implementations of C++ in any case where the destructor frees memory, because the destructor never gets called.
In some cases it can cause much worse errors.
A memory leak might happen if operator new is overridden but operator new[] is not. The same goes for the delete / delete[] operators.
Apart from resulting in undefined behavior, the most straightforward cause of leaks lies in the implementation not calling the destructor for all but the first object in the array. This will obviously result in leaks if the objects have allocated resources.
This is the simplest possible class I could think of resulting in this behaviour:
struct A {
char* ch;
A(): ch( new char ){}
~A(){ delete ch; }
};
A* as = new A[10]; // ten times the A::ch pointer is allocated
delete as; // only one of the A::ch pointers is freed.
PS: note that destructors fail to get called in lots of other programming mistakes, too: non-virtual base class destructors, false reliance on smart pointers, ...
Late for an answer, but...
If your delete mechanism is simply to call the destructor and put the freed pointer, together with the size implied by sizeof, onto a free stack, then calling delete on a chunk of memory allocated with new[] will result in memory being lost -- but not corruption.
More sophisticated malloc structures could corrupt on, or detect, this behaviour.
Why can't the answer be that it causes both?
Obviously memory is leaked whether heap corruption occurs or not.
Or rather, since I can re-implement new and delete... couldn't it cause anything at all? Technically I can make new and delete do what new[] and delete[] do.
HENCE: Undefined Behavior.
I was answering a question which was marked as a duplicate, so I'll just copy my answer here in case it matters. The way memory allocation works was explained well before me; I'll just explain the cause and effects.
Just a little thing right off google: http://en.cppreference.com/w/cpp/memory/new/operator_delete
Anyhow, delete is a function for a single object. It frees the instance the pointer points to, and that's it;
delete[] is a function used to deallocate arrays. That means it doesn't just free the pointed-to object; it declares the whole memory block of that array as garbage.
That's all cool in theory, but you tell me your application works. You are probably wondering... why?
The reason is that C++ does not detect memory leaks. If you use delete without the brackets, it'll delete just the array as a single object - a process which might cause a memory leak.
Cool story, a memory leak, why should I care?
A memory leak happens when allocated memory doesn't get freed. That memory then stays reserved unnecessarily, which makes you lose usable memory for pretty much no reason. That's bad programming, and you should probably fix it in your systems.