Suppose I declare an array as int myarray[5]
Or declare it as int *myarray = malloc(5 * sizeof(int))
Will both the declarations set equal amount of memory in number of bytes?
Without considering that the former declaration is for the stack and the latter on the heap.
Thank you!
There's a fundamental difference that may not be apparent in the way you use myarray:
int myarray[5]; declares an array of five integers, and the array is an automatic variable (and it is uninitialized).
int * myarray = malloc(5 * sizeof(int)); declares a variable that is a pointer to an int (also as an automatic variable), and that pointer is initialized with the result of a library call. That library call promises to make the resulting pointer point to a region of memory that's big enough to store five consecutive integers.
Because of pointer arithmetic, array-to-pointer decay and the convention that a[i] is the same as *(a + i), you can use both variables in the same way, namely as myarray[i]. This is of course by design.
If you're looking for a difference, then maybe the following helps: The array-of-five-ints is a single object, and it has a definite size. By contrast, the malloc library call doesn't create any objects. It just sets aside enough memory (and suitably aligned), but it could for example allocate a lot more memory.
(In C++ there's of course the additional distinction between memory and objects.)
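As a minimal sketch of that point (written in C++ so the cast compiles; in plain C the cast can be dropped), both forms are indexed identically, while sizeof exposes the difference between the whole array object and a mere pointer:

#include <cstdio>
#include <cstdlib>

int main() {
    int arr[5];                                      // array object, automatic storage, uninitialized
    int *dyn = (int *)std::malloc(5 * sizeof(int));  // pointer to heap storage
    if (dyn == nullptr) return 1;

    for (int i = 0; i < 5; ++i) {
        arr[i] = i;   // direct indexing of the array
        dyn[i] = i;   // same syntax, but really *(dyn + i)
    }

    // sizeof sees the whole array in one case and just the pointer in the other
    std::printf("sizeof arr = %zu, sizeof dyn = %zu\n", sizeof arr, sizeof dyn);

    std::free(dyn);   // only the malloc'd block needs freeing
    return 0;
}

On a typical 64-bit build this prints 20 for the array and 8 for the pointer, though the exact numbers depend on the platform.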
Neither is guaranteed to allocate exactly 5*sizeof(int) bytes, though both will give you at least that much space (assuming no allocation failures or stack exhaustion).
In the first case, the stack variable may be surrounded by alignment padding, and/or stack canaries (depending on compile options). These could result in the stack pointer being adjusted by more than 5*sizeof(int) bytes.
In the second case, you allocate an int * on the stack (sizeof(int *) bytes), plus the space that malloc returns. malloc may allocate additional memory in the form of allocation tracking structures, alignment padding, linked-list pointers, etc. Thus, in that case you are also not guaranteed to allocate exactly 5*sizeof(int) bytes.
If you want to be very precise about your memory usage, the mmap function allows you to request pages of virtual memory from the OS. The memory you request this way will be precisely the amount you request (ignoring the space taken up in the kernel to track those allocations).
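A rough sketch of that approach, assuming a POSIX system (Linux or macOS); note that mmap hands out whole pages, so the granularity is the page size rather than arbitrary byte counts:

#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Ask the OS for one anonymous, private page of virtual memory.
    long page = sysconf(_SC_PAGESIZE);
    void *mem = mmap(nullptr, (size_t)page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) {
        std::perror("mmap");
        return 1;
    }
    // ... use up to 'page' bytes here ...
    munmap(mem, (size_t)page);  // return the page to the OS
    return 0;
}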
Short Answer: No, normally the latter uses slightly more memory.
Long Answer:
The memory manager will certainly use some extra memory to manage the returned pointer, track it, and free it at a later time, and you declare an extra pointer to point to that memory. So the actual memory used is the 5 ints plus sizeof(int*) plus the malloc overhead. In the first case you use exactly 5 ints (plus, possibly, alignment padding).
Dynamic allocation will require at least a few extra bytes: some for the pointer variable in addition to the 5 int-sized elements, and potentially some extra bytes to track the size of the allocated region so that it can be freed properly.
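If you want to observe that overhead, glibc offers malloc_usable_size (a glibc-specific extension, not standard C or C++), which reports how large the returned block actually is; a small sketch:

#include <cstdio>
#include <cstdlib>
#include <malloc.h>   // malloc_usable_size, glibc extension

int main() {
    int *p = (int *)std::malloc(5 * sizeof(int));
    if (!p) return 1;
    // The usable size is usually >= the 20 bytes requested here,
    // because the allocator rounds requests up to its internal bin sizes.
    std::printf("requested %zu, usable %zu\n",
                5 * sizeof(int), malloc_usable_size(p));
    std::free(p);
    return 0;
}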
Related
I know that there is no way in C++ to obtain the size of a dynamically created array, such as:
int* a;
a = new int[n];
What I would like to know is: Why? Did people just forget this in the specification of C++, or is there a technical reason for this?
Isn't the information stored somewhere? After all, the command
delete[] a;
seems to know how much memory it has to release, so it seems to me that delete[] has some way of knowing the size of a.
It's a follow-on from the fundamental rule of "don't pay for what you don't need". In your example, delete[] a; doesn't need to know the size of the array, because int doesn't have a destructor. If you had written:
std::string* a;
a = new std::string[n];
...
delete [] a;
Then the delete has to call destructors (and needs to know how many to call) - in which case the new has to save that count. However, given it doesn't need to be saved on all occasions, Bjarne decided not to give access to it.
(In hindsight, I think this was a mistake ...)
Even with int of course, something has to know about the size of the allocated memory, but:
Many allocators round up the size to some convenient multiple (say 64 bytes) for alignment and convenience reasons. The allocator knows that a block is 64 bytes long - but it doesn't know whether that is because n was 1 ... or 16.
The C++ run-time library may not have access to the size of the allocated block. If for example, new and delete are using malloc and free under the hood, then the C++ library has no way to know the size of a block returned by malloc. (Usually of course, new and malloc are both part of the same library - but not always.)
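To make "the new has to save that count" concrete, here is a toy sketch of one possible scheme: over-allocate and stash an element count in front of the array. The names toy_new_array and toy_delete_array are made up for illustration; real compilers use their own ABI-specific layout, and over-alignment is glossed over here.

#include <cstddef>
#include <cstdlib>
#include <new>

template <typename T>
T *toy_new_array(std::size_t n) {
    // Over-allocate to leave room for a count "cookie" in front of the elements.
    void *raw = std::malloc(sizeof(std::size_t) + n * sizeof(T));
    if (!raw) throw std::bad_alloc();
    std::size_t *cookie = static_cast<std::size_t *>(raw);
    *cookie = n;                                              // remember the count
    T *elems = reinterpret_cast<T *>(cookie + 1);
    for (std::size_t i = 0; i < n; ++i) new (elems + i) T();  // construct each element
    return elems;
}

template <typename T>
void toy_delete_array(T *elems) {
    std::size_t *cookie = reinterpret_cast<std::size_t *>(elems) - 1;
    for (std::size_t i = *cookie; i > 0; --i) elems[i - 1].~T();  // destroy in reverse order
    std::free(cookie);
}

For int the count is never consulted, which is exactly why the language doesn't force every allocation to pay for the cookie.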
One fundamental reason is that there is no difference between a pointer to the first element of a dynamically allocated array of T and a pointer to any other T.
Consider a fictitious function that returns the number of elements a pointer points to.
Let's call it "size".
Sounds really nice, right?
If it weren't for the fact that all pointers are created equal:
char* p = new char[10];
size_t ps = size(p+1); // What?
char a[10] = {0};
size_t as = size(a); // Hmm...
size_t bs = size(a + 1); // Wut?
char i = 0;
size_t is = size(&i); // OK?
You could argue that the first should be 9, the second 10, the third 9, and the last 1, but to accomplish this you need to add a "size tag" on every single object.
A char will require 128 bits of storage (because of alignment) on a 64-bit machine. This is sixteen times more than what is necessary.
(Above, the ten-character array a would require at least 168 bytes.)
This may be convenient, but it's also unacceptably expensive.
You could of course envision a version that is only well-defined if the argument really is a pointer to the first element of a dynamic allocation by the default operator new, but this isn't nearly as useful as one might think.
You are right that some part of the system will have to know something about the size. But getting that information is probably not covered by the API of memory management system (think malloc/free), and the exact size that you requested may not be known, because it may have been rounded up.
You will often find that memory managers will only allocate space in a certain multiple, 64 bytes for example.
So, you may ask for new int[4], i.e. 16 bytes, but the memory manager will allocate 64 bytes for your request. To free this memory it doesn't need to know how much memory you asked for, only that it has allocated you one block of 64 bytes.
The next question may be, can it not store the requested size? This is an added overhead which not everybody is prepared to pay for. An Arduino Uno for example only has 2k of RAM, and in that context 4 bytes for each allocation suddenly becomes significant.
If you need that functionality then you have std::vector (or equivalent), or you have higher-level languages. C/C++ was designed to enable you to work with as little overhead as you choose to make use of, this being one example.
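For completeness, a short sketch of the std::vector route: it remembers its element count for you while still exposing a raw pointer when a C-style API needs one.

#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v(4);                     // dynamically sized; the size is tracked
    std::printf("%zu elements\n", v.size());   // prints: 4 elements
    int *raw = v.data();                       // raw int* for APIs that expect one
    raw[0] = 42;
    return 0;
}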
There is a curious case of overloading the operator delete that I found in the form of:
void operator delete[](void *p, size_t size);
The parameter size seems to default to the size (in bytes) of the block of memory to which void *p points. If this is true, it is reasonable to at least hope that it has a value passed by the invocation of operator new and, therefore, would merely need to be divided by sizeof(type) to deliver the number of elements stored in the array.
As for the "why" part of your question, Martin's rule of "don't pay for what you don't need" seems the most logical.
There's no way to know how you are going to use that array.
The allocation size does not necessarily match the element number so you cannot just use the allocation size (even if it was available).
This is a deep flaw in other languages, not in C++.
You achieve the functionality you desire with std::vector yet still retain raw access to arrays. Retaining that raw access is critical for any code that actually has to do some work.
Many times you will perform operations on subsets of the array and when you have extra book-keeping built into the language you have to reallocate the sub-arrays and copy the data out to manipulate them with an API that expects a managed array.
Just consider the trite case of sorting the data elements.
If you have managed arrays then you can't use recursion without copying data to create new sub-arrays to pass recursively.
Another example is an FFT which recursively manipulates the data starting with 2x2 "butterflies" and works its way back to the whole array.
To fix the managed array you now need "something else" to patch over this defect and that "something else" is called 'iterators'. (You now have managed arrays but almost never pass them to any functions because you need iterators +90% of the time.)
The size of an array allocated with new[] is not visibly stored anywhere, so you can't access it. And the new[] operator doesn't return an array, just a pointer to the array's first element. If you want to know the size of a dynamic array, you must store it manually or use a class such as std::vector from the standard library.
Even if I write this statement
char *test= new char[35];
sizeof(test) will always return 4 (or another number depending on the system) rather than 35. I assume that this is because the size of a pointer is strictly the physical "pointing entity" and not the amount of memory reserved for that pointer. Is it correct?
Moreover, is there a way to retrieve the amount of memory reserved for a particular pointer using sizeof()?
Nope.
Pointers are just plain variables (generally implemented as integer addresses) — that they can point to other objects is irrelevant to sizeof. Don't think about them as something "magic", that is somehow intimately bound to what they point to. Pointers are no more than a street number.
I'm bringing this up because of:
[...] the size of a pointer is strictly the physical "pointing entity"
and not the amount of memory reserved for that pointer.
In your line of code:
An array of 35 chars is allocated in dynamic memory
Its first element's address is returned by new
You keep this address in test.
Note that any notion of array, or size thereof, has vanished before the second step. The pointer knows nothing about it. You know.
If you want to retrieve the size of the array, you'll need to keep track of it yourself in a separate variable, or use a class that does it for you, namely std::vector<char>.
I assume that this is because the size of a pointer is strictly the physical "pointing entity" and not the amount of memory reserved for that pointer. Is it correct?
Yes, this is correct; you are taking the sizeof() of a pointer. A pointer is an address in memory; on 32-bit systems this will be 4 bytes, on 64-bit systems 8 bytes.
Moreover, is there a way to retrieve the amount of memory reserved for a particular pointer using sizeof()?
No. sizeof() knows nothing about what a pointer points at; it's a compile-time calculation. Getting this size will depend how it's been allocated.
In general you should be using std::vector<>. To get the size of a std::vector<>, use std::vector<>::size().
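A small sketch contrasting the two, reusing the test pointer from the question next to a vector of the same length:

#include <cstdio>
#include <vector>

int main() {
    char *test = new char[35];
    std::vector<char> v(35);

    std::printf("sizeof(test) = %zu\n", sizeof(test));  // size of the pointer: 4 or 8
    std::printf("v.size()     = %zu\n", v.size());      // 35, tracked by the vector

    delete[] test;
    return 0;
}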
As said in the other answers, from a plain pointer there is no way to know how much memory has been reserved at the address it points to; right after the allocation it also just points to uninitialized garbage.
As soon as you fill the memory with a C string, you can get the string's length with strlen(test), because it looks for the terminating null byte (0x0) (note that this is the string length, not the allocation size).
A better solution would be to use an array:
char test[35];
size_t size = sizeof(test); // returns 35
Allocating stuff on the stack is awesome because then we have RAII and don't have to worry about memory leaks and such. However sometimes we must allocate on the heap:
If the data is really big (recommended) - because the stack is small.
If the size of the data to be allocated is only known at runtime (dynamic allocation).
Two questions:
Why can't we allocate dynamic memory (i.e. memory of size that is
only known at runtime) on the stack?
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable? I.e. Thing t;.
Edit: I know some compilers support Variable Length Arrays - which is dynamically allocated stack memory. But that's really an exception to the general rule. I'm interested in understanding the fundamental reasons for why generally, we can't allocate dynamic memory on the stack - the technical reasons for it and the rational behind it.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
It's more complicated to achieve this. The size of each stack frame is burned-in to your compiled program as a consequence of the sort of instructions the finished executable needs to contain in order to work. The layout and whatnot of your function-local variables, for example, is literally hard-coded into your program through the register and memory addresses it describes in its low-level assembly code: "variables" don't actually exist in the executable. To let the quantity and size of these "variables" change between compilation runs greatly complicates this process, though it's not completely impossible (as you've discovered, with non-standard variable-length arrays).
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable
This is just a consequence of the syntax. C++'s "normal" variables happen to be those with automatic or static storage duration. The designers of the language could technically have made it so that you can write something like Thing t = new Thing and just use a t all day, but they did not; again, this would have been more difficult to implement. How do you distinguish between the different types of objects, then? Remember, your compiled executable has to remember to auto-destruct one kind and not the other.
I'd love to go into the details of precisely why and why not these things are difficult, as I believe that's what you're after here. Unfortunately, my knowledge of assembly is too limited.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
Technically, this is possible, but it is not sanctioned by the C++ standard. Variable length arrays (VLAs) allow you to create dynamically sized constructs in stack memory. Most compilers allow this as a compiler extension.
example:
int array[n];
//where n is only known at run-time
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable? I.e. Thing t;.
We can. Whether you do it or not depends on implementation details of a particular task at hand.
example:
int i;
int *ptr = &i;
We can allocate variable-length space dynamically on stack memory by using the function _alloca. This function allocates memory from the program stack. It simply takes the number of bytes to be allocated and returns a void* to the allocated space, just as a malloc call does. This allocated memory will be freed automatically on function exit.
So it need not be freed explicitly. One has to keep the allocation size in mind here, as a stack overflow exception may occur. Stack overflow exception handling can be used for such calls. In case of a stack overflow exception one can use _resetstkoflw() to restore it.
So our new code with _alloca would be :
int NewFunctionA()
{
char* pszLineBuffer = (char*) _alloca(1024*sizeof(char));
…..
// Program logic
….
// no need to free pszLineBuffer
return 1;
}
Every variable that has a name becomes, after compilation, a dereferenced pointer whose address is computed by adding (or, depending on the platform, subtracting) an "offset value" to a stack pointer, a register that contains the address the stack has currently reached (the current function's return address is usually stored there).
int i,j,k;
becomes
(SP-12) ;i
(SP-8) ;j
(SP-4) ;k
To let this "sum" to be efficient, the offsets have to be constant, so that they can be encode directly in the instruction op-code:
k=i+j;
becomes
MOV (SP-12),A; i-->>A
ADD A,(SP-8) ; A+=j
MOV A,(SP-4) ; A-->>k
You see here how 4, 8 and 12 are now "code", not "data".
That implies that a variable declared after another requires that other variable to keep a fixed, compile-time-defined size.
Dynamically sized arrays can be an exception, but they can only be the last variable of a function. Otherwise, all the variables that follow would have offsets that have to be adjusted at run time after that array's allocation.
This creates the complication that dereferencing the addresses requires arithmetic (not just a plain offset) or the capability to modify the opcode as variables are declared (self-modifying code).
Both solutions become sub-optimal in terms of performance, since they can break the locality of addressing or add more calculation for each variable access.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
You can with Microsoft compilers using _alloca() or _malloca(). For gcc, it's alloca()
I'm not sure it's part of the C / C++ standards, but variations of alloca() are included with many compilers. If you need aligned allocation, such as "n" bytes of memory starting on an "m" byte boundary (where m is a power of 2), you can allocate n+m bytes of memory, add m to the pointer and mask off the lower bits. Example to allocate hex 1000 bytes of memory on a hex 100 boundary. You don't need to preserve the value returned by _alloca(), since it's stack memory and is automatically freed when the function exits.
char *p;
p = _alloca(0x1000+0x100);
p = (char *)(((size_t)p + 0x100) & ~(size_t)0xff);
The most important reason is that heap memory can be deallocated in any order, but the stack requires memory to be deallocated in a fixed order, i.e. LIFO order. Hence, practically, it would be difficult to implement this.
Virtual memory is a virtualization of memory, meaning that it behaves as the resource it is virtualizing (memory). In a system, each process has a different virtual memory space:
32-bit programs: 2^32 bytes (4 Gigabytes)
64-bit programs: 2^64 bytes (16 Exabytes)
Because the virtual space is so big, only some regions of it are usable (meaning that only some regions can be read/written just as if they were real memory). Virtual memory regions are initialized and made usable through mapping. Virtual memory does not consume resources and can be considered unlimited (for 64-bit programs), BUT usable (mapped) virtual memory is limited and uses up resources.
For every process, some mappings are done by the kernel and others by the user code. For example, before the code even starts executing, the kernel maps specific regions of the virtual memory space of a process for the code instructions, global variables, shared libraries, the stack space, etc. The user code uses dynamic allocation (allocation wrappers such as malloc and free) or garbage collectors (automatic allocation) to manage the virtual memory mapping at application level (for example, if there is not enough free usable virtual memory available when calling malloc, new virtual memory is automatically mapped).
You should differentiate between mapped virtual memory (the total size of the stack, the total current size of the heap, ...) and allocated virtual memory (the part of the heap that malloc has explicitly told the program it can use).
Regarding this, I reinterpret your first question as:
Why can't we save dynamic data (i.e. data whose size is only known at runtime) on the stack?
First, as others have said, it is possible: Variable Length Arrays (VLAs) are just that (standard in C99; only a compiler extension in C++). However, they have some technical drawbacks, and maybe that's the reason why they are an exception:
The size of the stack used by a function becomes unknown at compile time; this adds complexity to stack management, additional registers (variables) must be used, and it may impede some compiler optimizations.
The stack is mapped at the beginning of the process and it has a fixed size. That size should be increased greatly if variable-size-data is going to be placed there by default. Programs that do not make extensive use of the stack would waste usable virtual memory.
Additionally, data saved on the stack must be saved and deleted in Last-In-First-Out order, which is perfect for local variables within functions but unsuitable if we need a more flexible approach.
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable?
As this answer explains, we can.
Read a bit about Turing Machines to understand why things are the way they are. Everything was built around them as the starting point.
https://en.wikipedia.org/wiki/Turing_machine
Anything outside of this is technically an abomination and a hack.
What is the difference between this:
somefunction() {
...
char *output;
output = (char *) malloc((len * 2) + 1);
...
}
and this:
somefunction() {
...
char output[(len * 2) + 1];
...
}
When is one more appropriate than the other?
thanks all for your answers. here is a summary:
ex. 1 is heap allocation
ex. 2 is stack allocation
there is a size limitation on the stack, use it for smaller allocations
you have to free heap allocation, or it will leak
the stack allocation is not accessible once the function exits
the heap allocation is accessible until you free it (or the app ends)
VLA's are not part of standard C++
corrections welcome.
here is some explanation of the difference between heap vs stack:
What and where are the stack and heap?
The first allocates memory on the heap. You have to remember to free the memory, or it will leak. This is appropriate if the memory needs to be used outside the function, or if you need to allocate a huge amount of memory.
The second allocates memory on the stack. It will be reclaimed automatically when the function returns. This is the most convenient if you don't need to return the memory to your caller.
Use locals when you only have a small amount of data, and you are not going to use the data outside the scope of the function you've declared it in. If you're going to pass the data around, use malloc.
Local variables are held on the stack, which is much more size limited than the heap, where arrays allocated with malloc go. I usually go for anything > 16 bytes being put on the heap, but you have a bit more flexibility than that. Just don't be allocating locals in the kb/mb size range - they belong on the heap.
The first example allocates a block of storage from the heap. The second one allocates storage from the stack. The difference becomes visible when you return output from somefunction(). The dynamically allocated storage is still available for your use, but the stack-based storage in the second example is, um, nowhere. You can still write into this storage and read it for a while, until the next time you call a function, at which time the storage will get overwritten randomly with return addresses, arguments, and such.
There's a lot of other weird stuff going on with the code in this question too. First off, if this is a C++ program, you'd want to use new instead of malloc(), so you'd say
output = new char[len+1];
And what's with the len*2 + 1 anyway? Maybe this is something special in your code, but I'm guessing you want to allocate Unicode characters or multibyte characters. If it's Unicode, the null termination takes two bytes, as does each character, and char is the wrong type, being an 8-bit byte in most compilers. If it's multibyte, then hey, all bets are off.
First some terminology:
The first sample is called heap allocation.
The second sample is called stack allocation.
The general rule is: allocate on the stack, unless:
The required size of the array is unknown at compile time.
The required size exceeds 10% of the total stack size. The default stack size on Windows and Linux is usually 1 or 2 MB. So your local array should not exceed 100,000 bytes.
You tagged your question with both C++ and C, but the second solution is not allowed in C++. Variable length arrays are only allowed in C(99).
If you were to assume 'len' is a constant, both will work.
malloc() (and C++'s 'new') allocate the memory on the heap, which means you have to free() (or if you allocated with 'new', 'delete') the buffer, or the memory will never be reclaimed (leak).
The latter allocates the array on the stack, and will be gone when it goes out of scope. This means that you can't return pointers to the buffer outside the scope it's allocated in.
The former is useful when you want to pass the block of memory around (but in C++, it's best managed with an RAII class, not manually), while the latter is best for small, fixed-size arrays that only need to exist in one scope.
Lastly, you can mark the otherwise stack-allocated array with 'static' to take it off the stack and into a global data section:
static char output[(len * 2) + 1];
This enables you to return pointers to the buffer outside of its scope, however, all calls to such a function will refer to the same piece of global data, so don't use it if you need a unique block of memory every time.
Finally, don't use malloc in C++ unless you have a really good reason (e.g., you need realloc). Use 'new' instead, and the accompanying 'delete'.
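A minimal sketch of the RAII approach mentioned above, assuming len is only known at run time (make_buffer is a placeholder name): the vector owns the buffer, sizes it dynamically, and can safely be returned to the caller.

#include <cstddef>
#include <vector>

std::vector<char> make_buffer(std::size_t len) {
    std::vector<char> output(len * 2 + 1);  // heap-backed, sized at run time
    // ... fill output ...
    return output;  // the vector releases its storage automatically when it is destroyed
}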
Let's say I have a pointer allocated to hold 4096 bytes. How would one deallocate the last 1024 bytes in C? What about in C++? What if, instead, I wanted to deallocate the first 1024 bytes, and keep the rest (in both languages)? What about deallocating from the middle (it seems to me that this would require splitting it into two pointers, before and after the deallocated region).
Don't try and second-guess memory management. It's usually cleverer than you ;-)
The only thing you can achieve is the first scenario, 'deallocating' the last 1K:
char * foo = malloc(4096);
foo = realloc(foo, 4096-1024);
However, even in this case, there is NO GUARANTEE that "foo" will be unchanged. Your entire 4K may be freed, and realloc() may move your memory elsewhere, thus invalidating any pointers to it that you may hold.
This is valid for both C and C++ - however, use of malloc() in C++ is a bad code smell, and most folk would expect you to use new to allocate storage. And memory allocated with new cannot be realloc()ed - or at least, not in any kind of portable way. STL vectors would be a much better approach in C++.
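One common defensive pattern, sketched below in C++ for consistency (drop the casts in plain C), is to keep the old pointer until realloc reports success, so a failed call doesn't leak or invalidate the original block:

#include <cstdlib>

int main() {
    char *foo = (char *)std::malloc(4096);
    if (!foo) return 1;

    // Keep the old pointer until realloc succeeds; on failure the
    // original 4096-byte block is still valid and still owned by foo.
    char *tmp = (char *)std::realloc(foo, 4096 - 1024);
    if (tmp) foo = tmp;   // the block may have moved; always use the returned pointer

    std::free(foo);
    return 0;
}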
If you have n bytes of mallocated memory, you can realloc m bytes (where m < n) and thus throw away the last n-m bytes.
To throw away from the beginning, you can malloc a new, smaller buffer and memcpy the bytes you want and then free the original.
The latter option is also available using C++ new and delete. It can also emulate the first realloc case.
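A sketch of that copy-and-free approach for dropping the first 1024 bytes of a 4096-byte block; drop_front and the hard-coded sizes are just for illustration:

#include <cstdlib>
#include <cstring>

// "Free" the first 1024 bytes of a 4096-byte block by copying
// the tail into a fresh, smaller allocation.
char *drop_front(char *block) {
    char *tail = (char *)std::malloc(4096 - 1024);
    if (!tail) return block;                       // keep the original on failure
    std::memcpy(tail, block + 1024, 4096 - 1024);  // copy the bytes we want to keep
    std::free(block);                              // the old 4096-byte block is gone
    return tail;                                   // caller now owns 3072 bytes
}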
You don't have "a pointer allocated to hold 4096 bytes", you have a pointer to an allocated block of 4096 bytes.
If your block was allocated with malloc(), realloc() will allow you to reduce or increase the size of the block. The start address of the block won't necessarily stay the same, though.
You can't change the start address of a malloc'd memory block, which is really what your second scenario is asking. There's also no way to split a malloc'd block.
This is a limitation of the malloc/calloc/realloc/free API -- and implementations may rely on these limitations (for example, keeping bookkeeping information about the allocation immediately before the start address, which would make moving the start address difficult.)
Now, malloc isn't the only allocator out there -- your platform or libraries might provide other ones, or you could write your own (which gets memory from the system via malloc, mmap, VirtualAlloc or some other mechanism) and then hands it out to your program in whatever fashion you desire.
For C++, if you allocate memory with std::malloc, the information above applies. If you're using new and delete, you're allocating storage for and constructing objects, and so changing the size of an allocated block doesn't make sense -- objects in C++ are a fixed size.
You can make it shorter with realloc(). I don't think the rest is possible.
You can use realloc() to apparently make the memory shorter. Note that for some implementations such a call will actually do nothing. You can't free the first bit of the block and retain the last bit.
If you find yourself needing this kind of functionality, you should consider using a more complex data structure. An array is not the correct answer to every programming problem.
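For example, std::vector can drop a range from the middle and keep the remainder contiguous, handling the copying for you (a sketch, not tied to any particular allocator behavior):

#include <cstdio>
#include <vector>

int main() {
    std::vector<char> buf(4096);
    // Remove 1024 elements from the middle; the vector shifts the tail down
    // and is left with a single contiguous buffer of 3072 elements.
    buf.erase(buf.begin() + 1024, buf.begin() + 2048);
    std::printf("%zu\n", buf.size());   // prints 3072
    return 0;
}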
http://en.wikipedia.org/wiki/New_(C%2B%2B)
SUMMARY: In contrast to C's realloc, it is not possible to directly reallocate memory allocated with new[]. To extend or reduce the size of a block, one must allocate a new block of adequate size, copy over the old memory, and delete the old block. The C++ standard library provides a dynamic array that can be extended or reduced in its std::vector template.