This question already has answers here:
How does delete[] "know" the size of the operand array?
(9 answers)
How does delete[] know it's an array?
(16 answers)
Closed 9 years ago.
int* i = new int[4];
delete[] i;
When we call delete[], how does the program know that "i" is 4 bytes long? Is the 4 stored somewhere in memory?
Does the implementation of delete[] depend on the operating system or the compiler?
Is there some system API to get the length of i?
As HadeS said, something has to hold the information about how much memory has been allocated; what holds it, and where?
It must be held in memory somewhere, perhaps near the pointer i.
First off, i is not "4 bytes long". Rather, i is a pointer to an array of four ints.
Next, delete[] doesn't need to know anything, because int has no destructor. All that has to happen is that the memory needs to be freed, which is done by the system's allocator. This is the same situation as with free(p) -- you don't need to tell free how much memory needs to be freed, since you expect it to figure that out.
The situation is different when destructors need to be called; in that case, the C++ implementation does indeed need to remember the number of C++ objects separately. The method for this is up to the implementation, although many compilers follow the popular Itanium ABI, which allows object code produced by different compilers to be linked together.
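As a rough, hedged illustration of that bookkeeping (the WithDtor class and its logging operator new[] are invented for this sketch; the exact extra amount is an implementation detail of Itanium-ABI compilers such as GCC and Clang, and other implementations lay it out differently):
#include <cstdio>
#include <cstdlib>
#include <new>

struct WithDtor {
    int x;
    ~WithDtor() {}   // non-trivial destructor: the element count must be remembered somewhere

    // Class-scoped allocation functions, only here to make the requested size visible.
    static void* operator new[](std::size_t bytes) {
        std::printf("operator new[] was asked for %zu bytes\n", bytes);
        if (void* p = std::malloc(bytes)) return p;
        throw std::bad_alloc();
    }
    static void operator delete[](void* p) noexcept { std::free(p); }
};

int main() {
    // sizeof(WithDtor) * 4 is 16 on typical platforms, yet the request is usually
    // 16 + sizeof(std::size_t): the extra bytes hold the "array cookie" with the count.
    WithDtor* p = new WithDtor[4];
    delete[] p;
}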
There is no way for you to query this information. You should consider dynamic arrays a misfeature of C++: there is essentially no reason to use them*, and you can always do better with some kind of class that manages memory and objects separately and individually. Since you will have to remember the number of array elements anyway, it is much better to encapsulate the size and the allocation in one coherent class than to have vague dynamic arrays that you cannot really use without passing extra information along anyway (unless you had self-terminating semantics, but then you would just be using the extra space for the terminator).
*) And there are at least two standard defects about dynamic arrays that nobody is bothered enough to fix.
When you dynamically allocate memory, the implementation allocates an extra block of memory in addition to what you asked for; this extra block holds the information about how much memory has been allocated.
When you later release the memory using delete, that extra block is read to find out how much memory was allocated, and the space is freed accordingly.
I don't think there is any standard API which will fetch this information.
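That description can be imitated by hand. A minimal sketch of the idea (the toy_alloc/toy_free names are invented; real allocators keep their headers private and lay them out however they like):
#include <cstdio>
#include <cstdlib>

// Stores the requested size in a header just in front of the pointer it hands out,
// roughly the way many real allocators do internally.
void* toy_alloc(std::size_t n) {
    std::size_t* raw = static_cast<std::size_t*>(std::malloc(sizeof(std::size_t) + n));
    if (!raw) return nullptr;
    raw[0] = n;          // remember the size in the header
    return raw + 1;      // hand out the memory just past the header
}

void toy_free(void* p) {
    if (!p) return;
    std::size_t* raw = static_cast<std::size_t*>(p) - 1;   // step back to the header
    std::printf("freeing %zu bytes\n", raw[0]);
    std::free(raw);
}

int main() {
    void* p = toy_alloc(16);
    toy_free(p);         // prints "freeing 16 bytes"
}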
Related
This is something I've been wondering for a while and never found an answer for:
Why is it that when you allocate something on the heap you cannot determine the size of it from just the pointer, yet you can delete it using just the pointer and somehow C++ knows how many bytes to free?
Does this have something to do with the way it is stored on the heap?
Is this information there but not exposed by C++?
And perhaps this should be a separate question but I think it's pretty related so I'll ask it here:
Why is it that a dynamic array of elements must be deleted using delete[] as opposed to just the simple delete command? Why does C++ need this additional information to correctly free all the memory?
When an allocation is made, a small section of memory immediately before the returned address [or, technically, somewhere completely different, but just before is the most common scenario] will store the size of the allocation and, in the case of new[], also the number of allocated objects.
Note that the C++ standard doesn't give any way to retrieve this information for a reason: it may not accurately describe what is allocated. For example, the size of an array may very well be rounded up to some "nice" boundary [almost all modern allocators round to 16 bytes at the very least, so that the memory is usable for SSE and similar SIMD instruction sets on various processor architectures]. So if you allocated 40 bytes, it would report back 48, which isn't what you asked for, so it would be rather confusing. And of course, there is no guarantee that the information is stored at ALL - it may be implied by some other information that is stored in the "admin" block of the allocation.
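As a non-portable illustration of that rounding (malloc_usable_size is a glibc extension, not standard C or C++; other platforms have different names for this or nothing at all):
#include <cstdio>
#include <cstdlib>
#include <malloc.h>   // glibc-specific header for malloc_usable_size

int main() {
    void* p = std::malloc(40);
    // Typically prints a value >= 40, i.e. the request rounded up to the allocator's
    // bucket size; the exact number is an implementation detail, not something to rely on.
    std::printf("asked for 40, usable size is %zu\n", malloc_usable_size(p));
    std::free(p);
}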
And of course, you can use placement new, in which case there is no admin block, and the allocation is not deleted in the normal fashion - some arbitrary code wouldn't be able to tell the difference.
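For instance, a minimal sketch with single-object placement new (the Widget name is invented; the array form of placement new is deliberately avoided here because the standard leaves its overhead unspecified):
#include <cstdio>
#include <new>

struct Widget {
    int id;
    explicit Widget(int i) : id(i) { std::printf("constructed %d\n", id); }
    ~Widget()                      { std::printf("destroyed %d\n", id); }
};

int main() {
    alignas(Widget) unsigned char buffer[sizeof(Widget)];   // storage we provide ourselves
    Widget* w = new (buffer) Widget(42);   // placement new: no allocation, no admin block
    w->~Widget();                          // must destroy explicitly; never call delete w here
}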
delete differs from delete [] in that delete [] will know how many objects have been allocated, and call the destructor for all of those objects. It is also possible [or even likely] that new [] stores the number of elements in a way that means that calling delete [] on something that wasn't created with new [] will go horribly wrong.
And as Zan Lynx commented: if there is no destructor for the objects (e.g. when you are allocating data for int or struct { int x; double y; }, and so on, including classes that don't declare a destructor; note however that if the class contains a member of class type that has a destructor, the compiler will generate a destructor for you), then there is no need to store the count, or do anything else, so the compiler CAN, if it wishes, optimise this sort of allocation into regular new and delete.
This question already has answers here:
Why should C++ programmers minimize use of 'new'?
(19 answers)
Closed 9 years ago.
Say I have two sets of code,
std::vector<float> v1;
and
std::vector<float> *pV2 = new std::vector<float>(10);
What is the difference between the two other than the fact that you will have a larger chunk of memory allocated with the pointer to the vector? Is there an advantage to one vs. the other?
In my mind, it seems like allocating the pointer is just more of a hassle because you have to deal with deallocating it later.
What is the difference between the two other than the fact that you will have a larger chunk of memory allocated with the pointer to the vector?
'will have a larger chunk of memory allocated'
This isn't necessarily true! The std::vector might choose a default initial size for the internally managed data array that is much larger than 10.
'What is the difference between the two'
The main difference is that the 1st one is allocated on the local scope's stack, while the 2nd one (usually) goes to the heap. Note: the internally managed data array goes to the heap in either case!
To ensure proper memory management when you really have to use a std::vector<float>* pointer allocated from the heap, I'd recommend the use of C++ smart pointers, e.g.:
std::unique_ptr<std::vector<float> > pV2(new std::vector<float>(10));
For more details have a look at the documentation of <memory>.
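If C++14 is available, std::make_unique avoids spelling out new at all; a minimal sketch:
#include <memory>
#include <vector>

int main() {
    auto pV2 = std::make_unique<std::vector<float>>(10);   // heap-allocated vector of 10 floats
    pV2->push_back(1.0f);
}   // the unique_ptr destroys the vector here; no manual delete needed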
One of the critical differences is scope. In your first example, the vector will probably either be a member of a class, or it will be local to a function. If it's a class member, it will be destroyed when the containing object is destroyed. If it's local to a function, it will be destroyed when the function ends. The object absolutely cannot exist beyond that, so you have to be very careful if you try passing its address to another part of your program.
When you manually allocate something on the heap instead, it will exist for as long as you want. You're in complete control of the deallocation, which means you can create it in one object/function, and use or delete it in another whenever you need to.
It's also quite useful in various situations to be able to delay instantiation of an object until it's actually required. For example, it may need different construction parameters depending on user input, or you may want to take advantage of polymorphism (i.e. decide at runtime which sub-class to instantiate).
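A hedged sketch of those last two points (the Shape/Circle/Square names are invented for illustration):
#include <iostream>
#include <memory>

struct Shape  { virtual ~Shape() = default; virtual const char* name() const = 0; };
struct Circle : Shape { const char* name() const override { return "circle"; } };
struct Square : Shape { const char* name() const override { return "square"; } };

int main() {
    char choice;
    std::cin >> choice;                       // which sub-class to build is only known at runtime

    std::unique_ptr<Shape> s;                 // instantiation delayed until we know what we need
    if (choice == 'c') s = std::make_unique<Circle>();
    else               s = std::make_unique<Square>();

    std::cout << s->name() << '\n';           // s could just as well be returned or stored elsewhere
}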
Another key difference for some situations is available memory. If you create an object locally to a function, it will reside on the stack. There is a lot less space available on the stack than on the heap, so you can run into difficulties when using particularly large objects (although that won't happen with a vector because it allocates on the heap internally anyway).
It's worth noting that the actual amount of memory used by the object is the same, whether it's on the stack or on the heap. The only difference is that if you manually allocate something on the heap, then you will also have a pointer to it. That's only an extra 4 or 8 bytes though, which is negligible in most cases.
This question already has answers here:
How to find the size of an array (from a pointer pointing to the first element array)?
(17 answers)
Closed 9 years ago.
Is there a function (that could be written) that allows you to find out the size of an array allocated with new:
int *a=new int[3];
*a=4;
*(a+1)=5;
*(a+2)=6;
Thanks!
There is no standard way to get the size of an array allocated with new.
A better approach to array allocation is std::vector, which does have a size() member -- and it automatically cleans up after itself when it goes out of scope.
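For comparison, a minimal sketch of the code from the question rewritten with std::vector:
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> a = {4, 5, 6};            // same contents as the new[] version
    std::printf("size = %zu\n", a.size());     // prints 3
}   // memory is released automatically when a goes out of scope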
Short answer: No.
Use std::vector instead.
It would be possible to write a function for this. But in the real world, it's a poor idea.
Although the act of calling new most likely stores the number of elements in the array that is allocated (or at least the size of the actual allocation underneath it), there is no way to get that information without relying on knowing how new works on your particular system, and that could change if you compile your code differently (e.g. debug or release builds), change the version of the compiler (or runtime library), etc.
Using the std::vector as mentioned is a much better way, since you then ALSO don't have to worry about freeing your array somewhere else.
If, for some reason, you don't want to [or have been told by your tutor, that you can't] use std::vector, you need to "remember" the size of the allocation.
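A minimal sketch of "remembering" it yourself, assuming std::vector really is off the table (the IntArray, make_array and free_array names are invented for illustration):
#include <cstddef>

// Keeps the size next to the pointer, which is exactly what new[] will not do
// for you in any portable way.
struct IntArray {
    int*        data;
    std::size_t size;
};

IntArray make_array(std::size_t n) { return IntArray{ new int[n], n }; }
void     free_array(IntArray& a)   { delete[] a.data; a.data = nullptr; a.size = 0; }

int main() {
    IntArray a = make_array(3);
    a.data[0] = 4; a.data[1] = 5; a.data[2] = 6;
    // a.size travels with the array wherever it is passed
    free_array(a);
}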
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Why [] is used in delete ( delete [] ) to free dynamically allocated array?
Why does C++ still have a delete[] AND a delete operator?
I'm wondering what the difference between them is, and I know the obvious answer some might give, that one is to delete an array and the other is to delete a single object, but I'm wondering why there should be two different deletion methods for these two operations. I mean, delete is basically implemented using C's free, which doesn't care whether the pointer is actually pointing to an array or a single object. The only reason I can think of is to be able to know that it's an array and call the destructor for each cell instead of only the first object, but that wouldn't be possible either, since the compiler cannot guess the length of the array just by looking at its pointer. By the way, though it's said to invoke undefined behaviour to call delete on memory allocated with new[], I can't imagine anything that could possibly go wrong.
As you have discovered, the compiler needs to know the length of an array (at least for non-trivial types) to be able to call destructors for each element. For this, new[] typically allocates some extra bytes to record the element count and returns a pointer to the end of this bookkeeping area.
When you use delete[] the compiler will look at the memory before the array to find the count and adjust the pointer, so that the originally allocated block is freed.
If you use delete to destroy a dynamically allocated array, destructors for elements (except the first) won't be called and typically this will end up attempting to free a pointer that doesn't point to the beginning of an allocated block, which may corrupt the heap.
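Roughly speaking, the generated code behaves like the following hand-written imitation (a hedged sketch of the bookkeeping described above, not what any particular compiler actually emits; the imitation_* names are invented):
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

struct Obj {
    ~Obj() { std::printf("destroying one Obj\n"); }   // non-trivial, so a count is needed
};

// Imitates new Obj[n]: allocate room for a count plus the elements, store the count,
// construct the elements, and hand back a pointer just past the count.
Obj* imitation_new_array(std::size_t n) {
    void* raw = std::malloc(sizeof(std::size_t) + n * sizeof(Obj));
    if (!raw) throw std::bad_alloc();
    *static_cast<std::size_t*>(raw) = n;                          // the "cookie"
    Obj* first = reinterpret_cast<Obj*>(static_cast<std::size_t*>(raw) + 1);
    for (std::size_t i = 0; i < n; ++i) new (first + i) Obj();
    return first;                                                 // the caller never sees the cookie
}

// Imitates delete[] p: read the count, destroy the elements in reverse, free the whole block.
void imitation_delete_array(Obj* first) {
    std::size_t* cookie = reinterpret_cast<std::size_t*>(first) - 1;
    for (std::size_t i = *cookie; i > 0; --i) first[i - 1].~Obj();
    std::free(cookie);                                            // free the originally allocated block
}

int main() {
    Obj* p = imitation_new_array(3);
    imitation_delete_array(p);        // prints "destroying one Obj" three times
}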
but that wouldn't be possible either, since the compiler cannot guess the length of the array just by looking at its pointer
That's not really true. The compiler itself doesn't need to guess anything, but it does decide which function to call to free the memory based on the operator it sees. There is a separate function dedicated to releasing arrays, and this function does indeed know the length of the array to be freed so it can appropriately call destructors.
It knows the length of the array because typically new[] allocates memory that includes the array length (since this is known on allocation) and returns a pointer to just the "usable" memory allocated. When delete[] is called it knows how to access this memory based on the pointer to the usable part of the array that was given.
When you allocate memory using new[], the compiler not only needs to construct each element, it also needs to keep track of how many elements have been allocated. This is needed for delete[] to work correctly.
Since new and delete operate on scalars, they don't need to do that, and could save on a little bit of overhead.
There is absolutely no requirement for new to be compatible with delete[] and vice versa. Mixing the two is undefined behaviour.
This question already has answers here:
Closed 13 years ago.
Possible Duplicate:
( POD )freeing memory : is delete[] equal to delete ?
Does delete deallocate the elements beyond the first in an array?
char *s = new char[n];
delete s;
Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?
For more complex types, would delete call the destructor of objects beyond the first one?
Object *p = new Object[n];
delete p;
How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more.
For primitive types, such as char, int, is there any difference between:
int *p = new int[n];
delete p;
delete[] p;
free(p);
Except for the routes taken by the respective calls through the delete->free deallocation machinery?
It's undefined behaviour (most likely it will corrupt the heap or crash the program immediately) and you should never do it. Only free memory with the primitive that corresponds to the one used to allocate it.
Violating this rule may lead to proper functioning by coincidence, but the program can break once anything is changed - the compiler, the runtime, the compiler settings. You should never rely on or expect such proper functioning.
delete[] uses compiler-specific service data for determining the number of elements. Usually a bigger block is allocated when new[] is called, the number is stored at the beginning and the caller is given the address behind the stored number. Anyway delete[] relies on the block being allocated by new[], not anything else. If you pair anything except new[] with delete[] or vice versa you run into undefined behaviour.
Read the FAQ: 16.3 Can I free() pointers allocated with new? Can I delete pointers allocated with malloc()?
Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?
Yes it does.
How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region?
The compiler needs to know. See FAQ 16.11
Because the compiler stores that information.
What I mean is the compiler needs different deletes to generate appropriate book-keeping code. I hope this is clear now.
Yes, this is dangerous!
Don't do it!
It will lead to program crashes or even worse behaviour!
For objects allocated with new you MUST use delete;
For objects allocated with new [] you MUST use delete [];
For objects allocated with malloc() or calloc() you MUST use free();
Be aware also that in all of these cases it is illegal to delete/free an already deleted/freed pointer a second time. Calling free, delete, or delete[] with a null pointer, however, is legal and does nothing.
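A minimal sketch of the matching pairs (the Obj type is just a placeholder):
#include <cstdlib>

struct Obj {};

int main() {
    Obj* a = new Obj;                      // single object ...
    delete a;                              // ... so plain delete

    Obj* b = new Obj[8];                   // array ...
    delete[] b;                            // ... so delete[]

    void* c = std::malloc(64);             // C-style allocation ...
    std::free(c);                          // ... so free()

    std::free(nullptr);                    // legal no-op
    delete static_cast<Obj*>(nullptr);     // also a legal no-op
}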
Yes, there's a real practical danger. Even implementation details aside, remember that the operator new/operator delete and operator new[]/operator delete[] functions can be replaced completely independently. For this reason, it is wise to think of new/delete, new[]/delete[], malloc/free etc. as different, completely independent methods of memory allocation, which have absolutely nothing in common.
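For example, here is a minimal sketch of replacing just the global array forms; the logging is only there to make the independence from the scalar forms visible (what the default functions do internally is up to the implementation):
#include <cstdio>
#include <cstdlib>
#include <new>

// Only the array forms are replaced; scalar new/delete keep their default behaviour.
void* operator new[](std::size_t bytes) {
    std::printf("array new of %zu bytes\n", bytes);
    if (void* p = std::malloc(bytes)) return p;
    throw std::bad_alloc();
}

void operator delete[](void* p) noexcept {
    std::printf("array delete\n");
    std::free(p);
}

int main() {
    int* a = new int[4];   // goes through the replaced array functions
    delete[] a;
    int* b = new int;      // still uses the default scalar new/delete
    delete b;
}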
Raymond Chen (Microsoft developer) has an in-depth article covering scalar vs. vector deletes, and gives some background to the differences. See:
http://blogs.msdn.com/oldnewthing/archive/2004/02/03/66660.aspx
Does delete deallocate the elements beyond the first in an array?
No. delete will deallocate only the first element, regardless of which compiler you use. It may work in some cases, but that's coincidental.
Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?
Depends on how the memory is marked as free. Again, implementation dependent.
For more complex types, would delete call the destructor of objects beyond the first one?
No. Try this:
#include <cstdio>

class DelTest {
    static int next;
    int i;
public:
    DelTest() : i(next++) { printf("Allocated %d\n", i); }
    ~DelTest(){ printf("Deleted %d\n", i); }
};

int DelTest::next = 0;

int main(){
    DelTest *p = new DelTest[5];   // constructs 5 objects, prints "Allocated 0" .. "Allocated 4"
    delete p;                      // wrong form: at most "Deleted 0" is printed (undefined behaviour)
    return 0;
}
How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region?
Yes, the size is stored some place. Where it is stored depends on the implementation. For example, the allocator could store the size in a header preceding the returned address.
What if the memory region was allocated with some overhang for performance reasons? For example one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more.
It is for this reason that the returned address is made to align to word boundaries. The "overhang" can be seen using the sizeof operator and applies to objects on the stack as well.
For primitive types, such as char, int, is there any difference between ...?
Yes. malloc and new could be using separate blocks of memory. Even if this were not the case, it's a good practice not to assume they are the same.
It's undefined behavior. Hence, the answer is: yes, there could be danger. And it's impossible to predict exactly what will trigger problems. Even if it works one time, will it work again? Does it depend on the type? The element count?
For primitive types, such as char, int, is there any difference between:
I'd say you'll get undefined behaviour. So you shouldn't count on stable behaviour. You should always use new/delete, new[]/delete[] and malloc/free pairs.
Although it might seem logical that you can mix new[] with free, or use delete instead of delete[], this rests on the assumption that the compiler is fairly simplistic, i.e., that it will always use malloc() to implement the memory allocation for new[].
The problem is that if your compiler has a smart enough optimizer, it might see that there is no "delete[]" corresponding to the new[] for the object you created. It might therefore assume that it can fetch the memory for it from anywhere, including the stack, in order to save the cost of calling the real malloc() for the new[]. Then when you try to call free() or the wrong kind of delete on it, it is likely to fail badly.
Step 1 read this: what-is-the-difference-between-new-delete-and-malloc-free
You are only looking at what you see on the developer side.
What you are not considering is how the std lib does memory management.
The first difference is that new and malloc allocate memory from two different areas in memory (new from the free store and malloc from the heap; don't focus on the names, they are both basically heaps, those are just their official names from the standard). If you allocate from one and de-allocate to the other you will mess up the data structures used to manage the memory (there is no guarantee they will use the same structure for memory management).
When you allocate a block like this:
int* x = new int; // 0x32
Memory may look like this (it probably won't, since I made this up without thinking that hard):
Memory  Value        Comment
0x08    0x40         // Chunk size
0x16    0x10000008   // Free list for chunk size 40
0x24    0x08         // Block size
0x32    ??           // Address returned by new
0x40    0x08         // Pointer back to head block
0x48    0x32         // Link to next item in a chain of something
The point is that there is a lot more information in the allocated block than just the int you allocated to handle memory management.
The standard does not specify how this is done because (in C/C++ style) they did not want to impinge on the compiler/library manufacturers' ability to implement the most efficient memory management method for their architecture.
Taking this into account, you want the manufacturer to have the ability to distinguish array allocation/deallocation from normal allocation/deallocation, so that it is possible to make each as efficient as possible independently. As a result you cannot mix and match, as internally they may use different data structures.
If you actually analyse the memory allocation differences between C and C++ applications you find that they are very different. And thus it is not unreasonable to use completely different techniques of memory management to optimise for the application type. This is another reason to prefer new over malloc() in C++, as it will probably be more efficient (the more important reason, though, will always be reducing complexity (IMO)).