Possible Duplicate:
How to get memory block length after malloc?
If I have a pointer, is it possible to learn how many bytes were allocated by new?
When I googled, I found a solution for Windows (_msize()) and for Mac (malloc_size()), but nothing for Linux.
And if not, does anybody know why it is hidden from the programmer? delete should definitely know such information.
Update:
As far as I know, if I have this code:
class A {
public:
    ~A() {}
    int m_a;
};

class B : public A {
public:
    ~B() {}
    int m_b;
};

int main() { A* b = new B(); delete b; return 0; }
The destructor of A will be called, but all the memory allocated by new will still be freed.
This means the size can somehow be computed knowing only the pointer. So what is the reason for hiding it from the programmer?
Unfortunately, there is no portable way of obtaining the number of bytes allocated by new and malloc. There are a number of reasons why this is the case:
On some platforms, delete and free do nothing at all. As such, they don't need to store size information. This is surprisingly common in embedded platforms; it lets you use C or C++ code written for other platforms unchanged, as long as you don't do too much allocation.
Even on more common platforms, the system may allocate a different number of bytes than you ask for. Typically your allocation will be aligned to some larger size - possibly much larger than your original request. The storage metadata might also be stored in a very slow data structure - you wouldn't want to be taking locks and accessing a hash table in time-critical code.
As portable languages, C and C++ can't offer a feature that won't be available (or well-defined, or reasonably fast) on every platform. That's why this is not available in C++. That said, you rarely need it: C++ offers std::vector, which does track the size of your allocation, and std::string, which takes care of all of those details for you.
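For instance, a minimal sketch contrasting a raw allocation, whose size the standard gives you no way to query, with std::vector, which does track it:

#include <iostream>
#include <vector>

int main() {
    int* raw = new int[100];        // no portable way to ask how large this block really is
    std::vector<int> v(100);        // the vector tracks its own element count
    std::cout << v.size() << '\n';  // prints 100
    delete[] raw;
    return 0;
}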
new, malloc, calloc and all the other heap-related allocation routines in the language (yes, there are many more than those) will allocate at least the amount of memory you requested; in general they will allocate more.
There is no portable way to know how much they allocated. In fact there is no way at all unless you know exactly what heap manager you are using.
You also need to distinguish allocated memory in the sense of memory you may safely access through the returned pointer (that's what malloc_size returns on macOS, and probably what _msize returns on Windows) from the memory actually 'taken away from the heap' by the allocation. The latter includes bookkeeping information, which may or may not be adjacent to the block you allocated, and may or may not be the same for same-sized allocations.
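For what it's worth, glibc on Linux does expose the usable size of a malloc'ed block through the non-standard malloc_usable_size(); a minimal sketch (glibc-specific, and the reported value is typically larger than what you requested):

#include <cstdio>
#include <cstdlib>
#include <malloc.h>  // glibc-specific: declares malloc_usable_size()

int main() {
    void* p = std::malloc(65);
    // Non-portable: reports the usable size of the block, which is usually
    // rounded up from the 65 bytes actually requested.
    std::printf("requested 65, usable %zu\n", malloc_usable_size(p));
    std::free(p);
    return 0;
}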
Q: So can I query the malloc package to find out how big an allocated block is?
A: Unfortunately, there is no standard or portable way. (Some compilers provide nonstandard extensions.) If you need to know, you'll have to keep track of it yourself.
C-FAQ
The new operator invokes a constructor, so the size of the allocation depends on the type whose constructor you're invoking.
E.g.
class A
{
private:
    int* x;
public:
    A() { x = new int[100]; }
};
This will allocate sizeof(int) * 100 bytes, but you cannot know that if the implementation of A is hidden from you.
If you do the allocation yourself:
int* x = new int[100];
then you know how much you asked for, because you have access to sizeof for the element type.
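A short sketch of that point (the figure is what you requested, not necessarily what the allocator actually reserved):

#include <cstddef>
#include <iostream>

int main() {
    int* x = new int[100];
    std::size_t requested = 100 * sizeof(int);  // known because you wrote the expression yourself
    std::cout << requested << " bytes requested\n";
    delete[] x;
    return 0;
}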
Furthermore, the delete operator invokes a destructor, so for complex objects it again does not need to know the size of the allocated memory; the responsibility for freeing the memory fully and correctly is delegated entirely to the programmer.
So there isn't a straightforward answer here.
In addition to the answers above: in some situations the size that has to be allocated and deallocated is known at compile time, and it would be a complete waste of memory to record the size somewhere.
In cases where the static type is equal to the dynamic type, the memory to be deallocated can be determined by the type.
In cases where the static type is not equal to the dynamic type, the deleted object's class has to have a virtual destructor. This destructor can be used to deallocate the right amount of memory.
When allocating an array, the size of the array is usually attached to that array in an implementation-dependent manner, and the size to be deallocated can be determined from the type of the elements and the size of the array.
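As a small illustration of the virtual-destructor case mentioned above (a minimal sketch with made-up class names):

#include <iostream>

struct Base {
    virtual ~Base() { std::cout << "~Base\n"; }
    int a;
};

struct Derived : Base {
    ~Derived() override { std::cout << "~Derived\n"; }
    int b;
};

int main() {
    Base* p = new Derived;  // static type Base*, dynamic type Derived
    delete p;               // virtual dispatch reaches ~Derived, so the whole
                            // Derived object and its storage are released
    return 0;
}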
X* x = new X();
Here it depends on the size of the class, i.e. on how many member variables the class contains.
int* x = new int[100];
Here it depends on how many elements you are going to allocate. Suppose an int takes 2 bytes; then this takes 200 bytes.
In short, it depends on the data type for which you are using the new operator.
Related
When I allocate a dynamic array in C++ (T* p = new T[n]), I use delete[] p to free the allocated memory. Obviously, the system knows the array size (in order, among other things, to call T's destructor n times). This is discussed elsewhere, for instance in How does delete[] “know” the size of the operand array?. These are implementation details.
But why was it not decided to make this information available?
Thanks.
delete[] might not necessarily know the exact array size. It might over-allocate for example, or do something else entirely bizarre yet conformant with the standard.
Facetiously, the answer could also be along the lines that nobody has managed to convince the standards committee of the merits of the idea; perhaps sizeof[](p) could be the proposed syntax? sizeof is already a keyword and already has a runtime-evaluable flavour in C, so it's not an enormous leap to envisage an equivalent in C++, and the [] distinguishes it from sizeof(pointer type).
It would inhibit optimisations where knowledge of the array size is not necessary:
int* foo = new int[...];
. . .
delete[] foo;
As int is a trivial type and does not have a destructor, the compiler does not need to know how many ints are there, even if it is an array of one int in a 4 MB memory chunk.
const MyType* const foo = new MyType[. . .];
. . .
delete[] foo;
Here the compiler knows the size of the array pointed to by foo, and it knows that it cannot legally change. So it can just use that information directly and need not store the number of items in the allocated array.
Because new is not the only source of a dynamic array (which shares the same signature as a pointer to a single object), and it has to be compatible with C.
Think about this:
void seem_good(MyStruct* d) {
    mess_with(d[3]);
}
The compiler has no way to perform such a check if the function is meant to be callable from other languages.
By the way, when inter-operation with other languages is not needed, C++ has its own solution: std::array.
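A minimal sketch of the std::array point; the size is part of the type, so nothing has to be stored next to the data:

#include <array>
#include <iostream>

int main() {
    std::array<int, 6> a{};         // the size 6 is baked into the type
    std::cout << a.size() << '\n';  // prints 6, known at compile time
    return 0;
}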
It's implementation-specific. One possible way the runtime tracks the size is to store it in memory just before the returned pointer, but it's not something you can count on. I was once tasked with tracking (heap) memory usage, so I did exactly this: wrote a custom allocator that allocated some extra bytes, stored the size, and returned an offset pointer. When you delete, you need to "unoffset" the pointer, do what you need to with the size, then delete.
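A rough sketch of that kind of tracking allocator (an illustration only; it ignores the alignment guarantees a real replacement for new/malloc would have to honour):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Allocate a little extra, store the requested size in front of the block,
// and hand back a pointer just past the stored size.
void* tracked_alloc(std::size_t n) {
    std::size_t* raw = static_cast<std::size_t*>(std::malloc(sizeof(std::size_t) + n));
    if (!raw) return nullptr;
    *raw = n;
    return raw + 1;
}

void tracked_free(void* p) {
    if (!p) return;
    std::size_t* raw = static_cast<std::size_t*>(p) - 1;  // "unoffset" the pointer
    std::printf("freeing a block of %zu bytes\n", *raw);
    std::free(raw);
}

int main() {
    void* p = tracked_alloc(40);
    tracked_free(p);
    return 0;
}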
This is something I've been wondering for a while and never found an answer for:
Why is it that when you allocate something on the heap you cannot determine the size of it from just the pointer, yet you can delete it using just the pointer and somehow C++ knows how many bytes to free?
Does this have something to do with the way it is stored on the heap?
Is this information there but not exposed by C++?
And perhaps this should be a separate question but I think it's pretty related so I'll ask it here:
Why is it that a dynamic array of elements must be deleted using delete[] as opposed to just plain delete? Why does C++ need this additional information to correctly free all the memory?
When an allocation is made, a small section of memory immediately before the returned block [or, technically, somewhere completely different, but just before is the most common scenario] will store the size of the allocation and, in the case of new[], also the number of allocated objects.
Note that the C++ standard doesn't give any way to retrieve this information for a reason: It may not accurately describe what is allocated, for example the size of an array may very well be rounded up to some "nice" boundary [almost all modern allocators round to 16 bytes at the very least, so that the memory is usable for SSE and other similar SIMD implementations on other processor architectures]. So if you allocated 40 bytes, it would report back 48, which isn't what you asked for, so it would be rather confusing. And of course, there is no guarantee that the information is stored at ALL - it may be implied by some other information that is stored in the "admin" block of the allocation.
And of course, you can use placement new, in which case there is no admin block, and the allocation is not deleted in the normal fashion - some arbitrary code wouldn't be able to tell the difference.
delete differs from delete [] in that delete [] will know how many objects have been allocated, and call the destructor for all of those objects. It is also possible [or even likely] that new [] stores the number of elements in a way that means that calling delete [] on something that wasn't created with new [] will go horribly wrong.
And as Zan Lynx commented, if there is no destructor for the objects (e.g. when you are allocating data for int or struct { int x; double y; }, etc., including classes without a destructor of their own [note, however, that if the class contains a member of a class type that has a destructor, the compiler will build a destructor for you]), then there is no need to store the count or do anything else, so the compiler CAN, if it wishes, optimise this sort of allocation into regular new and delete.
In "The C++ Programming Language" book Stroustrup says:
"To deallocate space allocated by new, delete and delete[] must be able to determine the size of the object allocated. This implies that an object allocated using the standard implementation of new will occupy slightly more space than a static object. Typically, one word is used to hold the object’s size.
That means every object allocated by new has its size located somewhere in the heap. Is the location known and if it is how can I access it?
In actual fact, typical implementations of the memory allocators store some other information too.
There is no standard way to access this information; in fact, there is nothing in the standard saying WHAT information is stored either (the size in bytes, the number of elements and their size, a pointer to the last element, etc.).
Edit:
If you have the base-address of the object and the correct type, I suspect the size of the allocation could be relatively easily found (not necessarily "at no cost at all"). However, there are several problems:
It assumes you have the original pointer.
It assumes the memory is allocated exactly with that runtime library's allocation code.
It assumes the allocator doesn't "round" the allocation address in some way.
To illustrate how this could go wrong, let's say we do this:
// allocated_length() is a hypothetical function that reports the size of an allocation.
size_t get_len_array(int *mem)
{
    return allocated_length(mem);
}

...

void func()
{
    int *p = new int[100];
    cout << get_len_array(p);
    delete [] p;
}

void func2()
{
    int buf[100];
    cout << get_len_array(buf); // Ouch! buf was never heap-allocated at all
}
That means every object allocated by new has its size located somewhere in the heap. Is the location known and if it is how can I access it?
Not really, that is not needed for all cases. To simplify the reasoning, there are two levels at which the sizes could be needed. At the language level, the compiler needs to know what to destroy. At the allocator level, the allocator needs to know how to release the memory given only a pointer.
At the language level, only the array versions new[] and delete[] need to handle any size. When you allocate with new, you get a pointer with the type of the object, and that type has a given size.
To destroy the object the size is not needed. When you delete, either the pointer is to the correct type, or the static type of the pointer is a base and the destructor is virtual. All other cases are undefined behavior, and thus can be ignored (anything can happen). If it is the correct type, then the size is known. If it is a base with a virtual destructor, the dynamic dispatch will find the final overrider, and at that point the type is known.
There could be different strategies to manage this. The one used in the Itanium C++ ABI (used by multiple compilers on multiple platforms, although not Visual Studio), for example, generates up to 3 different destructors per type, one of them being a version that takes care of releasing the memory. So although delete ptr is defined in terms of calling the appropriate destructor and then releasing the memory, in this particular ABI delete ptr calls a special destructor that both destroys and releases the memory.
When you use new[], the type of the pointer is the same regardless of the number of elements in the dynamic array, so the type cannot be used to retrieve that information. A common implementation is allocating an extra integral value and storing the size there, followed by the real objects, then returning a pointer to the first object. delete[] would then move the received pointer one integer back, read the number of elements, call the destructor for all of them and then release the memory (using the pointer obtained from the allocator, not the pointer given to the program). This is really only needed if the type has a non-trivial destructor; if the type has a trivial destructor, the implementation does not need to store the size and can avoid storing that number.
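A hedged illustration of that count-prefix ("cookie") scheme, written by hand so the layout is visible; real implementations differ in details such as padding and alignment:

#include <cstddef>
#include <cstdlib>
#include <new>

struct Widget {
    ~Widget() {}  // non-trivial destructor, so the element count must be kept
    int v;
};

// One possible layout for new Widget[n]:
//   [ std::size_t count ][ Widget 0 ][ Widget 1 ] ... [ Widget n-1 ]
//                         ^ pointer returned to the program
Widget* fake_new_array(std::size_t n) {
    void* raw = std::malloc(sizeof(std::size_t) + n * sizeof(Widget));
    *static_cast<std::size_t*>(raw) = n;  // store the element count in front
    Widget* first = reinterpret_cast<Widget*>(static_cast<std::size_t*>(raw) + 1);
    for (std::size_t i = 0; i < n; ++i)
        new (first + i) Widget();         // construct each element in place
    return first;
}

void fake_delete_array(Widget* first) {
    std::size_t* raw = reinterpret_cast<std::size_t*>(first) - 1;  // step back to the count
    for (std::size_t i = *raw; i > 0; --i)
        first[i - 1].~Widget();           // destroy in reverse order
    std::free(raw);
}

int main() {
    Widget* w = fake_new_array(3);
    fake_delete_array(w);
    return 0;
}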
Out of the language level, the real memory allocator (think of malloc) needs to know how much memory was allocated so that the same amount can be released. In some cases that can be done by attaching the metadata to the memory buffer in the same way that new[] stores the size of the array, by acquiring a larger block, storing the metadata there and returning a pointer beyond it. The deallocator would then undo the transformation to get to the metadata.
This is, on the other hand, not always needed. A common implementation for allocators of small size is to allocate pages of memory to form pools from which the small allocations are then obtained. To make this efficient, the allocator considers only a few different sizes, and allocations that don't fit one of the sizes exactly are bumped to the next size. If you request, for example, 65 bytes, the allocator might actually give you 128 bytes (assuming pools of 64 and 128 bytes). Thus given one of the larger blocks managed by the allocator, all pointers that were allocated from it have the same size. The allocator can then find the block from which pointer was allocated and infer the size from it.
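A toy sketch of that size-class idea, using the 65-to-128 example from the paragraph above (purely illustrative; real pool allocators find the owning block through address arithmetic on aligned pages rather than a loop like this):

#include <cstddef>
#include <cstdio>

// Round a request up to the nearest supported size class.
std::size_t round_to_class(std::size_t n) {
    const std::size_t classes[] = {16, 32, 64, 128, 256};
    for (std::size_t c : classes)
        if (n <= c) return c;
    return n;  // larger requests would be handled separately in a real allocator
}

int main() {
    // A 65-byte request is bumped to the 128-byte class; every pointer handed
    // out from a 128-byte pool is therefore known to be 128 bytes usable.
    std::printf("65 -> %zu\n", round_to_class(65));
    return 0;
}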
Of course, these are all implementation details that are not accessible to the C++ program in a standard, portable way, and the exact implementation can differ based not just on the program but also on the execution environment. If you are interested in knowing how the information is really kept in your environment, you might be able to find out, but I would think twice before trying to use it for anything other than learning purposes.
You are not deleting an object directly; instead, you pass a pointer to the delete operator.
Reference C++
You use delete by following it with a pointer to a block of memory originally allocated with new:
int * ps = new int; // allocate memory with new
. . . // use the memory
delete ps; // free memory with delete when done
This removes the memory to which ps points; it doesn’t remove the pointer ps itself.
You can reuse ps, for example, to point to another new allocation
Possible Duplicate:
(POD) freeing memory: is delete[] equal to delete?
Does delete deallocate the elements beyond the first in an array?
char *s = new char[n];
delete s;
Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?
For more complex types, would delete call the destructor of objects beyond the first one?
Object *p = new Object[n];
delete p;
How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more.
For primitive types, such as char, int, is there any difference between:
int *p = new int[n];
delete p;
delete[] p;
free(p);
Except for the routes taken by the respective calls through the delete->free deallocation machinery?
It's undefined behaviour (it will most likely corrupt the heap or crash the program immediately) and you should never do it. Only free memory with the primitive corresponding to the one used to allocate it.
Violating this rule may happen to work by coincidence, but the program can break once anything is changed: the compiler, the runtime, the compiler settings. You should never rely on or expect such behaviour.
delete[] uses compiler-specific service data for determining the number of elements. Usually a bigger block is allocated when new[] is called, the number is stored at the beginning and the caller is given the address behind the stored number. Anyway delete[] relies on the block being allocated by new[], not anything else. If you pair anything except new[] with delete[] or vice versa you run into undefined behaviour.
Read the FAQ: 16.3 Can I free() pointers allocated with new? Can I delete pointers allocated with malloc()?
Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array?
Yes it does.
How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region?
The compiler needs to know. See FAQ 16.11
Because the compiler stores that information.
What I mean is that the compiler needs the different forms of delete in order to generate the appropriate book-keeping code. I hope this is clear now.
Yes, this is dangerous!
Don't do it!
It will lead to program crashes or even worse behavior!
For objects allocated with new you MUST use delete;
For objects allocated with new [] you MUST use delete [];
For objects allocated with malloc() or calloc() you MUST use free();
Be aware also that for all these cases it's illegal to delete/free an already deleted/freed pointer a second time. Calling free, delete, or delete[] with a null pointer, however, is legal and does nothing.
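A minimal sketch of the correct pairings listed above:

#include <cstdlib>

int main() {
    int* a = new int;      delete a;      // new    -> delete
    int* b = new int[10];  delete[] b;    // new[]  -> delete[]
    int* c = static_cast<int*>(std::malloc(sizeof(int)));
    std::free(c);                         // malloc -> free
    return 0;
}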
Yes, there's a real practical danger. Even implementation details aside, remember that the operator new/operator delete and operator new[]/operator delete[] functions can be replaced completely independently. For this reason, it is wise to think of new/delete, new[]/delete[], malloc/free, etc. as different, completely independent methods of memory allocation, which have absolutely nothing in common.
Raymond Chen (a Microsoft developer) has an in-depth article covering scalar vs. vector deletes, and gives some background to the differences. See:
http://blogs.msdn.com/oldnewthing/archive/2004/02/03/66660.aspx
Does delete deallocate the elements
beyond the first in an array?
No. delete will deallocate only the first element, regardless of which compiler you use. It may appear to work in some cases, but that's coincidental.
Does it matter in the above case seeing as all the elements of s are allocated
contiguously, and it shouldn't be possible to delete only a portion of the array?
It depends on how the memory is marked as free. Again, it is implementation-dependent.
For more complex types, would delete call the destructor of objects beyond the first one?
No. Try this:
#include <cstdio>

class DelTest {
    static int next;
    int i;
public:
    DelTest() : i(next++) { printf("Allocated %d\n", i); }
    ~DelTest(){ printf("Deleted %d\n", i); }
};

int DelTest::next = 0;

int main(){
    DelTest *p = new DelTest[5];
    delete p;   // wrong form of delete: at most the first destructor runs (undefined behaviour)
    return 0;
}
How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region?
Yes, the size is stored somewhere. Where it is stored depends on the implementation. For example, the allocator could store the size in a header preceding the allocated address.
What if the memory region was allocated with some overhang for performance reasons? For example one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more.
It is for this reason that the returned address is made to align to word boundaries. The "overhang" can be seen using the sizeof operator and applies to objects on the stack as well.
For primitive types, such as char, int, is there any difference between ...?
Yes. malloc and new could be using separate blocks of memory. Even if this were not the case, it's a good practice not to assume they are the same.
It's undefined behavior. Hence, the answer is: yes, there could be danger. And it's impossible to predict exactly what will trigger problems. Even if it works one time, will it work again? Does it depend on the type? On the element count?
For primitive types, such as char, int, is there any difference between:
I'd say you'll get undefined behaviour. So you shouldn't count on stable behaviour. You should always use new/delete, new[]/delete[] and malloc/free pairs.
Although it might seem logical that you can mix new[] with free, or use delete instead of delete[], this rests on the assumption that the compiler is fairly simplistic, i.e. that it will always use malloc() to implement the memory allocation for new[].
The problem is that if your compiler has a smart enough optimizer it might see that there is no "delete[]" corresponding to the new[] for the object you created. It might therefore assume that it can fetch the memory for it from anywhere, including the stack in order to save the cost of calling the real malloc() for the new[]. Then when you try to call free() or the wrong kind of delete on it, it is likely to malfunction hard.
Step 1 read this: what-is-the-difference-between-new-delete-and-malloc-free
You are only looking at what you see on the developer side.
What you are not considering is how the std lib does memory management.
The first difference is that new and malloc allocate memory from two different areas (new from the free store and malloc from the heap; don't focus on the names, they are both basically heaps, those are just their official names from the standard). If you allocate from one and deallocate to the other, you will mess up the data structures used to manage the memory (there is no guarantee they use the same structure for memory management).
When you allocate a block like this:
int* x= new int; // 0x32
Memory may look like this (it probably won't, since I made this up without thinking that hard):

Memory   Value        Comment
0x08     0x40         // Chunk size
0x16     0x10000008   // Free list for chunk size 40
0x24     0x08         // Block size
0x32     ??           // Address returned by new
0x40     0x08         // Pointer back to head block
0x48     0x32         // Link to next item in a chain of something
The point is that there is a lot more information in the allocated block than just the int you allocated to handle memory management.
The standard does not specify how this is done because (in typical C/C++ style) it did not want to impinge on compiler/library implementers' ability to use the most efficient memory management method for their architecture.
Taking this into account, you want to give the implementer the ability to distinguish array allocation/deallocation from normal allocation/deallocation, so that each can be made as efficient as possible independently. As a result you cannot mix and match, as internally they may use different data structures.
If you actually analyse the memory allocation patterns of C and C++ applications, you find that they are very different, and thus it is not unreasonable to use completely different memory management techniques optimised for the application type. This is another reason to prefer new over malloc() in C++, as it will probably be more efficient (the more important reason, though, will always be reduced complexity, IMO).
I have an array, called x, whose size is 6*sizeof(float). I'm aware that declaring:
float x[6];
would allocate 6*sizeof(float) for x in the stack memory. However, if I do the following:
float *x; // in class definition
x = new float[6]; // in class constructor
delete [] x; // in class destructor
I would be allocating dynamic memory of 6*sizeof(float) to x. If the size of x does not change for the lifetime of the class, in terms of best practices for cleanliness and speed (I do vaguely recall, if not correctly, that stack memory operations are faster than dynamic memory operations), should I make sure that x is statically rather than dynamically allocated memory? Thanks in advance.
Declaring the array of fixed size will surely be faster. Each separate dynamic allocation requires finding an unoccupied block and that's not very fast.
So if you really care about speed (and have profiled), the rule is: if you don't need dynamic allocation, don't use it. If you do need it, think twice about how much to allocate, since reallocating is not very fast either.
Using an array member will be cleaner (more succinct, less error prone) and faster as there is no need to call allocation and deallocation functions. You will also tend to improve 'locality of reference' for the structure being allocated.
The two main reasons for using dynamically allocated memory for such a member are where the required size is only known at run time, or where the required size is large and it is known that this will have a significant impact on the available stack space on the target platform.
TBH data on the stack generally sits in the cache and hence it is faster. However if you dynamically allocate something once and then use it regularly it will also be cached and hence pretty much as fast.
The important thing is to avoid allocating and deallocating regularly (i.e. each time a function is called). If you just avoid doing regular allocations and deallocations (i.e. allocate and deallocate once only), then stack- and heap-allocated arrays will perform pretty much as quickly as each other.
Yes, declaring the array statically will perform faster.
This is very easy to test, just write a simple wrapping loop to instantiate X number of these objects. You can also step through the machine code and see the larger number of OPCODEs required to dynamically allocate the memory.
Static allocation is faster (there is no need to ask for memory), and there is no way you will forget to delete it or delete it with the wrong delete operator (delete instead of delete[]).
Construction and usage of dynamic/heap data consists of the following steps:
Ask for memory to allocate the objects (by calling the new operator). If there is no memory, new will throw a bad_alloc exception.
Create the objects with the default constructor (also done by new).
The user releases the memory (with the delete/delete[] operator); delete will call the object's destructor. Here a user can make a lot of mistakes:
forget to call delete - this will lead to a memory leak
call the wrong delete operator (e.g. delete instead of delete[]) - bad things will happen
call delete twice - bad things can happen
When using static objects or arrays of objects, there is no need to allocate memory and release it manually. This makes the code simpler and less error-prone.
So, in conclusion: if you know the size of the array at compile time and you don't mind the memory cost (maybe at runtime not all entries in the array will be used), the static array is obviously the preferred one.
For dynamically allocated data, it is worth looking at smart pointers.
Don't confuse the following cases:
int global_x[6]; // an array with static storage duration

struct Foo {
    int *pointer_x;  // a pointer member in instance data
    int member_x[6]; // an array in instance data
    Foo() {
        pointer_x = new int[6]; // a heap-allocated array
    }
    ~Foo() { delete[] pointer_x; }
};

int main() {
    int auto_x[6];          // an array on the stack (automatic variable)
    Foo auto_f;             // a Foo on the stack
    Foo *dyn_f = new Foo(); // a heap-allocated Foo.
}
Now:
auto_f.member_x is on the stack, because auto_f is on the stack.
(*dyn_f).member_x is on the heap, because *dyn_f is on the heap.
For both Foo objects, pointer_x points to a heap-allocated array.
global_x is in some data section which the OS or runtime creates each time the program is run. This may or may not be from the same heap as dynamic allocations, it doesn't usually matter.
So regardless of whether it's on the heap or not, member_x is a better bet than pointer_x in the case where the length is always 6, because:
It's less code and less error-prone.
Your object only needs a single allocation if the object is heap-allocated, instead of 2.
Your object requires no heap allocations if the object is on the stack.
It uses less memory in total, because of fewer allocations, and also because there's no need for storage for the pointer value.
Reasons to prefer pointer_x:
If you need to reallocate during the lifetime of the object.
If different objects will need a different size array (perhaps based on constructor parameters).
If Foo objects will be placed on the stack, but the array is so large that it won't fit on the stack. For instance if you've got 1MB of stack, then you can't use automatic variables which contain an int[262144].
Composition is more efficient: it is faster, has lower memory overhead, and causes less memory fragmentation.
You could do something like this:
template <int SZ = 6>
class Whatever {
    ...
    float floats[SZ];
};
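Usage would then look something like this (a sketch, assuming the rest of Whatever is filled in):

Whatever<>  w6;  // the default: float floats[6], stored inside the object itself
Whatever<8> w8;  // a different fixed size, still no heap allocation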
Use the stack allocated memory whenever possible. It will save you from the headaches of deallocating the memory, fragmentation of your virtual address space etc. Also, it is faster compared to the dynamic memory allocation.
There are more variables at play here:
The size of the array vs. the size of the stack: stack sizes are quite small compared to the free store (e.g. 1 MB up to 30 MB). Large chunks on the stack will cause a stack overflow.
The number of arrays you need: large number of small arrays
The lifetime of the array: if it's only needed locally inside a function, the stack is very convenient. If you need it after the function has exited, you must allocate it on the heap.
Garbage collection: if you allocate it on the heap, you need to clean it up manually, or have some flavour of smart pointer do the work for you (a sketch follows at the end of this answer).
As mentioned in another reply, large objects cannot be allocated on the stack because you cannot be sure what the stack size is. In the interest of portability, large objects or objects with variable sizes should always be allocated on the heap.
There has been a lot of development in the malloc/new routines now provided by the operating system (for example, Solaris's libumem). Dynamic memory allocation is often not a bottleneck.
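Picking up the smart pointer point from the list above, a minimal sketch using std::unique_ptr so the heap array is released automatically:

#include <memory>

int main() {
    // delete[] is called automatically when up goes out of scope.
    std::unique_ptr<float[]> up(new float[6]);
    up[0] = 1.0f;
    return 0;
}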
If you allocate the array statically (as a static member), there will only be one instance of it; the point of using a class is that you want multiple instances. There is no need to allocate the array dynamically at all, though:
class A {
    ...
private:
    float x[6];
};
is what you want.