Memory Allocation/Deallocation? [closed] - c++

I have been looking at memory allocation lately and I am a bit confused about the basics. I haven't been able to wrap my head around the simple stuff. What does it mean to allocate memory? What happens? I would appreciate answers to any of these questions:
Where is the "memory" that is being allocated?
What is this "memory"? Space in an array? Or something else?
What happens exactly when this "memory" gets allocated?
What happens exactly when the memory gets deallocated?
It would also really help me if someone could answer what malloc does in these C++ lines:
char* x;
x = (char*) malloc (8);
Thank you.

The Memory Model
The C++ standard has a memory model. It attempts to model the memory in a computer system in a generic way. The standard defines that a byte is a storage unit in the memory model and that memory is made up of bytes (§1.7):
The fundamental storage unit in the C++ memory model is the byte. [...] The memory available to a C++ program consists of one or more sequences of contiguous bytes.
The Object Model
The standard also provides an object model. This specifies that an object is a region of storage (so it is made up of bytes and resides in memory) (§1.8):
The constructs in a C++ program create, destroy, refer to, access, and manipulate objects. An object is a region of storage.
So there we go. Memory is where objects are stored. To store an object in memory, the required region of storage must be allocated.
Allocation and Deallocation Functions
The standard provides two implicitly declared global scope allocation functions:
void* operator new(std::size_t);
void* operator new[](std::size_t);
How these are implemented is not the standard's concern. All that matters is that they should return a pointer to some region of storage with the number of bytes corresponding to the argument passed (§3.7.4.1):
The allocation function attempts to allocate the requested amount of storage. If it is successful, it shall return the address of the start of a block of storage whose length in bytes shall be at least as large as the requested size. There are no constraints on the contents of the allocated storage on return from the allocation function.
It also defines two corresponding deallocation functions:
void operator delete(void*);
void operator delete[](void*);
These are defined to deallocate storage that has previously been allocated (§3.7.4.2):
If the argument given to a deallocation function in the standard library is a pointer that is not the null pointer value (4.10), the deallocation function shall deallocate the storage referenced by the pointer, rendering invalid all pointers referring to any part of the deallocated storage.
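As a rough sketch (not something you would normally write directly), the two allocation/deallocation functions can be called like this, assuming you only want raw, uninitialised storage:

#include <new>       // declares the global allocation and deallocation functions

int main() {
    void* raw = ::operator new(8);   // request 8 bytes of raw, uninitialised storage
    // ... no object lives in raw yet; it is just storage ...
    ::operator delete(raw);          // hand the storage back
}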
new and delete
Typically, you should not need to use the allocation and deallocation functions directly because they only give you uninitialised memory. Instead, in C++ you should be using new and delete to dynamically allocate objects. A new-expression obtains storage for the requested type by using one of the above allocation functions and then initialises that object in some way. For example new int() will allocate space for an int object and then initialise it to 0. See §5.3.4:
A new-expression obtains storage for the object by calling an allocation function (3.7.4.1).
[...]
A new-expression that creates an object of type T initializes that object [...]
In the opposite direction, delete will call the destructor of an object (if any) and then deallocate the storage (§5.3.5):
If the value of the operand of the delete-expression is not a null pointer value, the delete-expression will invoke the destructor (if any) for the object or the elements of the array being deleted.
[...]
If the value of the operand of the delete-expression is not a null pointer value, the delete-expression will call a deallocation function (3.7.4.2).
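To make those two quotes concrete, here is a minimal sketch of matching new-expressions and delete-expressions:

int main() {
    int* p = new int();     // a new-expression: storage is obtained, then the int is value-initialised to 0
    delete p;               // no destructor to run for int; the storage is then deallocated

    int* a = new int[4];    // the array form uses operator new[]
    delete[] a;             // and must be matched with delete[]
}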
Other Allocations
However, these are not the only ways that storage is allocated or deallocated. Many constructs of the language implicitly require allocation of storage. For example, an object definition like int a; also requires storage (§7):
A definition causes the appropriate amount of storage to be reserved and any appropriate initialization (8.5) to be done.
C standard library: malloc and free
In addition, the <cstdlib> header brings in the contents of the stdlib.h C standard library, which includes the malloc and free functions. They are also defined, by the C standard, to allocate and deallocate memory, much like the allocation and deallocation functions defined by the C++ standard. Here's the definition of malloc (C99 §7.20.3.3):
void *malloc(size_t size);
Description
The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate.
Returns
The malloc function returns either a null pointer or a pointer to the allocated space.
And the definition of free (C99 §7.20.3.2):
void free(void *ptr);
Description
The free function causes the space pointed to by ptr to be deallocated, that is, made available for further allocation. If ptr is a null pointer, no action occurs. Otherwise, if the argument does not match a pointer earlier returned by the calloc, malloc, or realloc function, or if the space has been deallocated by a call to free or realloc, the behavior is undefined.
However, there's never a good excuse to be using malloc and free in C++. As described before, C++ has its own alternatives.
Answers to Questions
So to answer your questions directly:
Where is the "memory" that is being allocated?
The C++ standard doesn't care. It simply says that the program has some memory which is made up of bytes. This memory can be allocated.
What is this "memory"? Space in an array? Or something else?
As far as the standard is concerned, the memory is just a sequence of bytes. This is purposefully very generic, as the standard only tries to model typical computer systems. You can, for the most part, think of it as a model of the RAM of your computer.
What happens exactly when this "memory" gets allocated?
Allocating memory makes some region of storage available for use by the program. Objects are initialized in allocated memory. All you need to know is that you can allocate memory. The actual allocation of physical memory to your process tends to be done by the operating system.
What happens exactly when the memory gets deallocated?
Deallocating some previously allocated memory causes that memory to be unavailable to the program. It becomes deallocated storage.
It would also really help me if someone could answer what malloc does in these C++ lines:
char* x;
x = (char*) malloc (8);
Here, malloc is simply allocating 8 bytes of memory. The pointer it returns is being cast to a char* and stored in x.
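For comparison, a minimal sketch of the idiomatic C++ counterpart of those two lines (as mentioned above, malloc is rarely needed in C++):

int main() {
    char* x = new char[8];   // room for 8 chars; the contents are uninitialised, just like with malloc
    // ... use x[0] .. x[7] ...
    delete[] x;              // the C++ counterpart of free(x)
}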

1) Where is the "memory" that is being allocated?
This is completely different based on your operating system, programming environment (gcc vs Visual C++ vs Borland C++ vs anything else), computer, available memory, etc. In general, memory is allocated from what is called the heap, a region of memory just waiting around for you to use. It will generally use your available RAM. But there are always exceptions. For the most part, so long as it gives us memory, where it comes from isn't a great concern. There are special types of memory, such as virtual memory, which may or may not actually be in RAM at any given time and may get moved off to your hard drive (or similar storage device) if you run out of real memory. A full explanation would be very long!
2) What is this "memory"? Space in an array? Or something else?
Memory is generally the RAM in your computer. If it is helpful to think of memory as a gigantic "array" (it certainly operates like one), then think of it as a ton of bytes (8-bit values, much like unsigned char values). It starts at an index of 0 at the bottom of memory. Just like before, though, there are tons of exceptions here, and some parts of memory may be mapped to hardware, or may not even exist at all!
3) What happens exactly when this "memory" gets allocated?
At any given time there should be (we really hope!) some of it available for software to allocate. How it gets allocated is highly system dependent. In general, a region of memory is allocated, the allocator marks it as used, and then a pointer is given to you to use that tells the program where in all of your system's memory that memory is located. In your example, the program will find a consecutive block of 8 bytes (char) and return a pointer to where it found that block after it marks it as "in use".
4) What happens exactly when the memory gets deallocated?
The system marks that memory as available for use again. This is incredibly complicated because this will often cause holes in memory. Allocate 8 bytes then 8 more bytes, then deallocate the first 8 bytes and you've got a hole. There are entire books written on handling deallocation, memory allocation, etc. So hopefully the short answer will be sufficient!
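A tiny sketch of the "hole" scenario just described (the layout is illustrative only; real allocators place blocks however they like):

#include <cstdlib>

int main() {
    void* a = std::malloc(8);   // first 8-byte block
    void* b = std::malloc(8);   // second 8-byte block, often placed right after the first
    std::free(a);               // the first block is now a "hole" in front of b
    // A later malloc(8) may be able to reuse that hole; a larger request cannot,
    // even if the total amount of free memory would be enough.
    std::free(b);
}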
5) It would also really help me if someone could answer what malloc does in these C++ lines:
REALLY crudely, and assuming it's in a function (by the way, never do this because it doesn't deallocate your memory and causes a memory leak):
void mysample() {
char *x; // 1
x = (char *) malloc(8); // 2
}
1) This is a pointer reserved in the local stack space. It has not been initialized, so it points to whatever that bit of memory happened to contain.
2) It calls malloc with a parameter of 8. The cast just lets C/C++ know you intend for it to be a (char *), because malloc returns a (void *), meaning it has no type applied. Then the resulting pointer is stored in your x variable.
In very crude x86 32bit assembly, this will look vaguely like
PROC mysample:
; char *x;
x = DWord Ptr [ebp - 4]
enter 4, 0 ; Set up the stack frame and reserve 4 bytes for x
; x = (char *) malloc(8);
push 8 ; Pass 8 as the argument to malloc
call malloc ; Call malloc to do its thing
add esp, 4 ; Caller removes the argument from the stack (cdecl)
mov x, eax ; Store the return value, which is in EAX, into x
leave
ret
The actual allocation is vaguely described in point 3. Malloc usually just calls a system function for this that handles all the rest, and like everything else here, it's wildly different from OS to OS, system to system, etc.

1. Where is the "memory" that is being allocated?
From a language perspective, this isn't specified, and mostly because the fine details often don't matter. Also, the C++ standard tends to err on the side of under-specifying hardware details, to minimise unnecessary restrictions (both on the platforms compilers can run on, and on possible optimisations).
sftrabbit's answer gives a great overview of this end of things (and it's all you really need), but I can give a couple of worked examples in case that helps.
Example 1:
On a sufficiently old single-user computer (or a sufficiently small embedded one), most of the physical RAM may be directly available to your program. In this scenario, calling malloc or new is essentially internal book-keeping, allowing the runtime library to track which chunks of that RAM are currently in use. You can do this manually, but it gets tedious pretty quickly.
Example 2:
On a modern multitasking operating system, the physical RAM is shared with many processes and other tasks including kernel threads. It's also used for disk caching and I/O buffering in the background, and is augmented by the virtual memory subsystem which can swap data to disk (or some other storage device) when they're not being used.
In this scenario, calling new may first check whether your process already has enough space free internally, and request more from the OS if not. Whatever memory is returned may be physical, or it may be virtual (in which case physical RAM may not be assigned to store it until it's actually accessed). You can't even tell the difference, at least without using platform-specific APIs, because the memory hardware and kernel conspire to hide it from you.
2. What is this "memory"? Space in an array? Or something else?
In example 1, it's something like space in an array: the address returned identifies an addressable chunk of physical RAM. Even here, RAM addresses aren't necessarily flat or contiguous - some addresses may be reserved for ROM, or for I/O ports.
In example 2, it's an index into something more virtual: your process' address space. This is an abstraction used to hide the underlying virtual memory details from your process. When you access this address, the memory hardware may directly access some real RAM, or it might need to ask the virtual memory subsystem to provide some.
3. What happens exactly when this "memory" gets allocated?
In general, a pointer is returned which you can use to store as many bytes as you asked for. In both cases, malloc or the new operator will do some housekeeping to track which parts of your process' address space are used and which are free.
4. What happens exactly when the memory gets deallocated?
Again in general, free or delete will do some housekeeping so they know that memory is available to be re-allocated.
It would also really help me if someone could answer what malloc does in these C++ lines:
char* x;
x = (char*) malloc (8);
It returns a pointer which is either NULL (if it couldn't find the 8 bytes you want), or some non-NULL value.
The only things you can usefully say about this non-NULL value (illustrated in the small sketch after this list) are that:
it's legal (and safe) to access each of those 8 bytes x[0]..x[7],
it's illegal (undefined behaviour) to access x[-1] or x[8] or actually any x[i] unless 0 <= i <= 7
it's legal to compare any of x, x+1, ..., x+8 (although you can't dereference the last of those)
if your platform/hardware/whatever have any restrictions on where you can store data in memory, then x meets them
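In code form, those rules look roughly like this (a sketch; the comments are the point, not the values):

#include <cstdlib>

int main() {
    char* x = (char*) std::malloc(8);
    if (x == NULL) return 1;    // allocation can fail

    x[0] = 'a';                 // fine: x[0] .. x[7] are yours to use
    x[7] = 'b';                 // fine: the last valid element
    // x[8] = 'c';              // undefined behaviour: one past the end
    // x[-1] = 'c';             // undefined behaviour: before the block

    char* end = x + 8;          // legal to form and compare this pointer...
    (void)end;                  // ...but not to dereference it

    std::free(x);
    return 0;
}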

To allocate memory means to ask the operating system for memory. It means that the program itself asks for "space" in RAM only when it needs it. For example, if you want to use an array but you don't know its size before the program runs, you can do two things:
- declare an array[x] with x decided by you, arbitrarily large. For example 100. But what if your program only needs an array of 20 elements? You are wasting memory for nothing.
- or your program can malloc an array of x elements just when it knows the correct size of x.
Programs in memory are divided into 4 segments:
- stack (needed for function calls)
- code (the binary executable code)
- data (global variables/data)
- heap, in this segment you find the allocated memory.
When you decide you don't need the allocated memory anymore, you give it back to the operating system.
If you want to allocate an array of 10 integers, you do:
int *array = (int *)malloc(sizeof(int) * 10);
And then you give it back to the OS with
free(array);

Related

I don't understand about memory issue of appending string

Runtime error: pointer index expression with base 0x000000000000 overflowed to 0xffffffffffffffff for frequency sort
In the first answer of that link, it says that appending a char to a string can cause a memory issue.
string s = "";
char c = 'a';
int max = INT_MAX;
for(int j=0;j<max;j++)
s = s + c;
The answer explains that s = s + c in the above code copies the same string again and again, so it will cause a memory issue. But I don't understand why that code copies the same string again and again.
Could someone help me understand that part? :)
I don't understand why that code copies the same string again and again.
Okay, let's look at the what happens each time the loop is iterated:
s = s + c;
There are three things the program has to do in order to execute that line of code:
Compute the temporary value s + c -- to do that, the program has to create a temporary, anonymous std::string object, and allocate for it (from the heap) an internal byte-buffer that is at least one byte larger than the number of chars currently in s (so that it can hold all of s's old contents, plus the additional char provided by c)
Set s equal to the temporary-string. In C++03 and earlier, this would be done by reallocating s's internal byte-buffer to be larger, then copying all of the bytes from the temporary-string into s's new/larger buffer. C++11 optimizes this a bit via the new move-assignment operator, so that all the bytes don't have to be copied; rather, s can simply take ownership of the temporary-string's byte-buffer.
Free the temporary string's resources, now that we're done using it. In practice, this takes the form of the std::string class's destructor calling delete[] on the old (no-longer-large-enough) byte-buffer.
Given that the above is going to be performed at least 2 billion times in a loop, it's already quite inefficient.
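A minimal sketch of the difference (assuming C++11 std::string, and with the iteration count reduced so it actually finishes):

#include <string>

int main() {
    const char c = 'a';

    // The pattern from the question: each iteration builds a temporary (s + c),
    // which allocates a new buffer, copies s into it, and then replaces s.
    std::string s;
    for (int j = 0; j < 1000; ++j)
        s = s + c;

    // Appending in place avoids the temporary; the buffer grows geometrically,
    // so there are far fewer allocations overall.
    std::string t;
    for (int j = 0; j < 1000; ++j)
        t += c;
}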
However, what I think the answer you referred to was particularly concerned about was heap fragmentation. Keep in mind that heap allocation doesn't work by magic; when you (or the std::string class, or anyone) asks to allocate N bytes of memory from the heap, the heap implementation's job is to find N bytes of contiguous memory and return it. And since there is no provision in C++ for moving blocks of memory around (as doing so would invalidate any pointers that the program might have pointing into those blocks of memory), the heap can't create an N-byte contiguous-memory-chunk out of smaller chunks; instead, there has to be a range of contiguous-memory-space already available. For example, it does the heap no good to have a total of 1GB of memory available, if that 1GB of memory is made up of thousands of nonconsecutive 1KB chunks and the caller is asking for a 2KB allocation.
Therefore, the heap's job is to efficiently allocate chunks of memory of the sizes the program requests, and when they are freed again, it will try to glue them back together into larger chunks again if it can, but it may not always be able to. Certain patterns of allocating and freeing memory may result in heap fragmentation, which is simply a large number of discontinuous memory allocations that render the small regions of free memory between them unusable for large allocations.
Whether or not this particular allocate/free pattern would cause that, I'm not sure; given that only one or two buffers are being allocated at a time, the heap may be able to reabsorb them back into adjacent free-memory chunks as they get freed again -- it probably depends on the particular heap algorithm the system is using, as well as on whether any other threads are allocating/freeing heap memory while this is going on. But I wouldn't be too surprised if there are systems out there where it would cause problems (particularly on 16-bit or 32-bit systems where virtual address space is limited, or embedded systems that don't use virtual memory)

How operator new knows that memory is allocated [duplicate]

In C++, how does operator new save the information that a piece of memory is allocated? AFAIK, it does not work in constant time and has to search for free memory in the heap. Or, maybe, it is not about C++, but about the OS?
P.S. I do not know whether it is specified by the standard or not, or whether it is managed by the OS or by C++, but how might it in fact be implemented?
There's no simple, standard answer. Most implementations of operator new/operator delete ultimately forward to malloc/free, but there are a lot of different algorithms which can be used for those. The only thing that's more or less universal is that allocation will typically allocate a little bit more than requested, and use the extra memory (normally in front of the address actually returned) to maintain some additional information: either the actual size allocated or a pointer to the end of the block (or the start of the next block). Except that some algorithms will keep allocations of the same size together, and be able to determine the size from the address. There is no single answer to your question.
new is oftentimes implemented on the basis of malloc/free.
How do malloc/free implement it? The answer is: it depends on the implementation. Surprisingly, malloc oftentimes does not keep track of the allocated blocks at all! The only thing malloc does most of the time is add a little bit of information containing the size of the block "before" the allocated block. Meaning that when you allocate 40 bytes, it will allocate 44 bytes (on 32-bit machines) and write the size in the first 4 bytes. It will return the address of that chunk + 4 to you.
Malloc/free keeps track of a freelist. A freelist is a list of freed memory chunks that have not (yet) been given back to the operating system. Malloc searches the freelist when a new block is needed and, when a fitting block is available, uses that.
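A toy sketch of the "size header" idea described above (toy_malloc/toy_free are made-up names; a real malloc is far more involved and also has to worry about alignment):

#include <cstdlib>
#include <cstdio>

void* toy_malloc(std::size_t n) {
    // Ask for the payload plus room for a size header in front of it.
    std::size_t* raw = (std::size_t*) std::malloc(n + sizeof(std::size_t));
    if (raw == NULL) return NULL;
    raw[0] = n;        // remember the requested size "before" the block
    return raw + 1;    // the caller gets the memory just after the header
}

void toy_free(void* p) {
    if (p == NULL) return;
    std::size_t* raw = ((std::size_t*) p) - 1;               // step back to the header
    std::printf("freeing a block of %zu bytes\n", raw[0]);   // the size is recoverable from the pointer alone
    std::free(raw);
}

int main() {
    void* p = toy_malloc(40);
    toy_free(p);    // prints: freeing a block of 40 bytes
}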
A more exhaustive answer about malloc/free I have given here:
How do malloc() and free() work?
One additional piece of information:
One implication of the fact that many allocators don't track allocated blocks: when you return memory via free or delete and pass in a pointer that was not allocated before, you will corrupt your heap, since the system is not able to check whether the pointer is valid. The really ugly thing about it is that in such a case your program will not crash immediately, but at any time after the cause of the error occurred ... and thus this error would be really ugly to find. That is one reason memory handling in C/C++ is so hard!
new maintains a data structure to keep track of individually allocated blocks. There are plenty of ways for doing that. Usually, some kind of linked list is used.
Here is a small article to illustrate this.

Why can't we allocate dynamic memory on the stack?

Allocating stuff on the stack is awesome because then we have RAII and don't have to worry about memory leaks and such. However sometimes we must allocate on the heap:
If the data is really big (recommended) - because the stack is small.
If the size of the data to be allocated is only known at runtime (dynamic allocation).
Two questions:
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable? I.e. Thing t;.
Edit: I know some compilers support Variable Length Arrays - which is dynamically allocated stack memory. But that's really an exception to the general rule. I'm interested in understanding the fundamental reasons why, generally, we can't allocate dynamic memory on the stack - the technical reasons for it and the rationale behind it.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
It's more complicated to achieve this. The size of each stack frame is burned into your compiled program as a consequence of the sort of instructions the finished executable needs to contain in order to work. The layout and whatnot of your function-local variables, for example, is literally hard-coded into your program through the register and memory addresses it describes in its low-level assembly code: "variables" don't actually exist in the executable. Letting the quantity and size of these "variables" change at run time would greatly complicate this process, though it's not completely impossible (as you've discovered, with non-standard variable-length arrays).
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable
This is just a consequence of the syntax. C++'s "normal" variables happen to be those with automatic or static storage duration. The designers of the language could technically have made it so that you can write something like Thing t = new Thing and just use a t all day, but they did not; again, this would have been more difficult to implement. How do you distinguish between the different types of objects, then? Remember, your compiled executable has to remember to auto-destruct one kind and not the other.
I'd love to go into the details of precisely why and why not these things are difficult, as I believe that's what you're after here. Unfortunately, my knowledge of assembly is too limited.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
Technically, this is possible, but it is not approved by the C++ standard. Variable length arrays (VLAs) allow you to create dynamically sized constructs in stack memory. Most compilers allow this as a compiler extension.
example:
int array[n];
//where n is only known at run-time
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable? I.e. Thing t;.
We can. Whether you do it or not depends on implementation details of a particular task at hand.
example:
int i;
int *ptr = &i;
We can allocate variable-length space dynamically in stack memory by using the function _alloca. This function allocates memory from the program stack. It simply takes the number of bytes to be allocated and returns a void* to the allocated space, just like a malloc call. This allocated memory will be freed automatically on function exit.
So it need not be freed explicitly. One has to keep the allocation size in mind here, as a stack overflow exception may occur. Stack overflow exception handling can be used for such calls. In case of a stack overflow exception, one can use _resetstkoflw() to restore it.
So our new code with _alloca would be :
int NewFunctionA()
{
char* pszLineBuffer = (char*) _alloca(1024*sizeof(char));
…..
// Program logic
….
// no need to free pszLineBuffer
return 1;
}
Every variable that has a name, after compilation, becomes a dereferenced pointer whose address value is computed by adding (or, depending on the platform, perhaps subtracting...) an "offset value" to a stack pointer (a register that contains the address the stack has currently reached: usually the "current function return address" is stored there).
int i,j,k;
becomes
(SP-12) ;i
(SP-8) ;j
(SP-4) ;k
For this "sum" to be efficient, the offsets have to be constants, so that they can be encoded directly in the instruction op-code:
k=i+j;
becomes
MOV (SP-12),A; i-->>A
ADD A,(SP-8) ; A+=j
MOV A,(SP-4) ; A-->>k
You see here how 4, 8 and 12 are now "code", not "data".
That implies that a variable that comes after another requires that "other" to have a fixed, compile-time-defined size.
Dynamically declared arrays can be an exception, but they can only be the last variable of a function. Otherwise, all the variables that follow would have offsets that have to be adjusted at run time after that array allocation.
This creates the complication that dereferencing the addresses requires arithmetic (not just a plain offset) or the capability to modify the op-code as variables are declared (self-modifying code).
Both solutions become sub-optimal in terms of performance, since either can break the locality of addressing or add more calculation for each variable access.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
You can with Microsoft compilers using _alloca() or _malloca(). For gcc, it's alloca().
I'm not sure it's part of the C / C++ standards, but variations of alloca() are included with many compilers. If you need an aligned allocation, such as "n" bytes of memory starting on an "m" byte boundary (where m is a power of 2), you can allocate n+m bytes of memory, add m to the pointer and mask off the lower bits. Example to allocate hex 1000 bytes of memory on a hex 100 boundary. You don't need to preserve the value returned by _alloca() since it's stack memory and automatically freed when the function exits.
char *p;
p = (char *) _alloca(0x1000 + 0x100);
p = (char *) (((size_t)p + (size_t)0x100) & ~(size_t)0xff);
The most important reason is that heap memory can be deallocated in any order, but the stack requires memory to be deallocated in a fixed order, i.e. LIFO order. Hence, practically, it would be difficult to implement this.
Virtual memory is a virtualization of memory, meaning that it behaves as the resource it is virtualizing (memory). In a system, each process has a different virtual memory space:
32-bit programs: 2^32 bytes (4 gigabytes)
64-bit programs: 2^64 bytes (16 exabytes)
Because the virtual space is so big, only some regions of that virtual space are usable (meaning that only some regions can be read/written just as if it were real memory). Virtual memory regions are initialized and made usable through mapping. Virtual memory does not consume resources and can be considered unlimited (for 64-bit programs), BUT usable (mapped) virtual memory is limited and uses up resources.
For every process, some mapping is done by the kernel and some by the user code. For example, before the code even starts executing, the kernel maps specific regions of the virtual memory space of a process for the code instructions, global variables, shared libraries, the stack space... etc. The user code uses dynamic allocation (allocation wrappers such as malloc and free), or garbage collectors (automatic allocation), to manage the virtual memory mapping at application level (for example, if there is not enough free usable virtual memory available when calling malloc, new virtual memory is automatically mapped).
You should differentiate between mapped virtual memory (the total size of the stack, the total current size of the heap...) and allocated virtual memory (the part of the heap that malloc has explicitly told the program it can use).
Regarding this, I reinterpret your first question as:
Why can't we save dynamic data (i.e. data whose size is only known at runtime) on the stack?
First, as others have said, it is possible: Variable Length Arrays are just that (in C99; some C++ compilers support them as an extension). However, they have some technical drawbacks, and maybe that's the reason why they are an exception:
The size of the stack used by a function becomes unknown at compile time; this adds complexity to stack management, additional registers (variables) must be used, and it may impede some compiler optimizations.
The stack is mapped at the beginning of the process and it has a fixed size. That size should be increased greatly if variable-size-data is going to be placed there by default. Programs that do not make extensive use of the stack would waste usable virtual memory.
Additionally, data saved on the stack must be saved and deleted in Last-In-First-Out order, which is perfect for local variables within functions but unsuitable if we need a more flexible approach.
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable?
As this answer explains, we can.
Read a bit about Turing Machines to understand why things are the way they are. Everything was built around them as the starting point.
https://en.wikipedia.org/wiki/Turing_machine
Anything outside of this is technically an abomination and a hack.

C Programming-Stack and Heap Array Declarations

Suppose I declare an array as int myarray[5]
Or declare it as int*myarray=malloc(5*sizeof(int))
Will both the declarations set equal amount of memory in number of bytes?
Without considering that the former declaration is for the stack and the latter on the heap.
Thank you!
There's a fundamental difference, that may not be apparent in the way you use myarray:
int myarray[5]; declares an array of five integers, and the array is an automatic variable (and it is uninitialized).
int * myarray = malloc(5 * sizeof(int)); declares a variable that is a pointer to an int (also as an automatic variable), and that pointer is initialized with the result of a library call. That library call promises to make the resulting pointer point to a region of memory that's big enough to store five consecutive integers.
Because of pointer arithmetic, array-to-pointer decay and the convention that a[i] is the same as *(a + i), you can use both variables in the same way, namely as myarray[i]. This is of course by design.
If you're looking for a difference, then maybe the following helps: The array-of-five-ints is a single object, and it has a definite size. By contrast, the malloc library call doesn't create any objects. It just sets aside enough memory (and suitably aligned), but it could for example allocate a lot more memory.
(In C++ there's of course the additional distinction between memory and objects.)
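One way to see that difference (a small sketch; the exact numbers depend on your platform):

#include <cstdio>
#include <cstdlib>

int main() {
    int myarray[5];
    int* p = (int*) std::malloc(5 * sizeof(int));

    std::printf("%zu\n", sizeof(myarray));   // 5 * sizeof(int): the array is a single object of known size
    std::printf("%zu\n", sizeof(p));         // sizeof(int*): just the pointer, not what it points to

    std::free(p);
}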
Neither is guaranteed to allocate exactly 5*sizeof(int) bytes, though both will give you at least that much space (assuming no allocation failures or stack exhaustion).
In the first case, the stack variable may be surrounded by alignment padding, and/or stack canaries (depending on compile options). These could result in the stack pointer being adjusted by more than 5*sizeof(int) bytes.
In the second case, you allocate an int * on the stack (sizeof(int *) bytes), plus the space that malloc returns. malloc may allocate additional memory in the form of allocation tracking structures, alignment padding, linked-list pointers, etc. Thus, in that case you are also not guaranteed to allocate exactly 5*sizeof(int) bytes.
If you want to be very precise about your memory usage, the mmap function allows you to request pages of virtual memory from the OS. The memory you request this way will be precisely the amount you request (ignoring the space taken up in the kernel to track those allocations).
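For completeness, a Unix/Linux-style sketch of that approach (mmap/munmap are not part of standard C or C++, and the exact flags vary slightly by platform):

#include <sys/mman.h>
#include <unistd.h>

int main() {
    long page = sysconf(_SC_PAGESIZE);             // typically 4096 bytes
    void* p = mmap(NULL, (size_t)page,
                   PROT_READ | PROT_WRITE,         // readable and writable
                   MAP_PRIVATE | MAP_ANONYMOUS,    // not backed by a file
                   -1, 0);
    if (p == MAP_FAILED) return 1;
    // ... use exactly one page of memory ...
    munmap(p, (size_t)page);                       // give it back to the OS
}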
Short answer: No, normally the latter uses slightly more memory.
Long Answer:
Memory management will certainly use some extra memory to manage the returned pointer and be able to track it and free it at a later time, and you declare an extra pointer to point to that memory. So its actual memory use is 5 * sizeof(int) + sizeof(int*) + malloc_overhead. But in the first case you use exactly 5 ints (plus, possibly, alignment padding).
Dynamic allocation will require at least a few extra bytes: however many bytes the pointer variable needs, in addition to the 5 int-sized elements, and potentially some extra bytes to track the size of the allocated region so that it can be freed properly.

c++: local array definition versus a malloc call

What is the difference between this:
void somefunction() {
...
char *output;
output = (char *) malloc((len * 2) + 1);
...
}
and this:
void somefunction() {
...
char output[(len * 2) + 1];
...
}
When is one more appropriate than the other?
Thanks all for your answers. Here is a summary:
ex. 1 is heap allocation
ex. 2 is stack allocation
there is a size limitation on the stack, use it for smaller allocations
you have to free heap allocation, or it will leak
the stack allocation is not accessible once the function exits
the heap allocation is accessible until you free it (or the app ends)
VLA's are not part of standard C++
corrections welcome.
here is some explanation of the difference between heap vs stack:
What and where are the stack and heap?
The first allocates memory on the heap. You have to remember to free the memory, or it will leak. This is appropriate if the memory needs to be used outside the function, or if you need to allocate a huge amount of memory.
The second allocates memory on the stack. It will be reclaimed automatically when the function returns. This is the most convenient if you don't need to return the memory to your caller.
Use locals when you only have a small amount of data, and you are not going to use the data outside the scope of the function you've declared it in. If you're going to pass the data around, use malloc.
Local variables are held on the stack, which is much more size limited than the heap, where arrays allocated with malloc go. I usually go for anything > 16 bytes being put on the heap, but you have a bit more flexibility than that. Just don't be allocating locals in the kb/mb size range - they belong on the heap.
The first example allocates a block of storage from the heap. The second one allocates storage from the stack. The difference becomes visible when you return output from somefunction(). The dynamically allocated storage is still available for your use, but the stack-based storage in the second example is, um, nowhere. You can still write into this storage and read it for awhile, until the next time you call a function, at which time the storage will get overwritten randomly with return addresses, arguments, and such.
There's a lot of other weird stuff going on with the code in this question too. First off, if this is a C++ program, you'd want to use new instead of malloc(), so you'd say
output = new char[len+1];
And what's with the len*2 + 1 anyway? Maybe this is something special in your code, but I'm guessing you want to allocate unicode characters or multibyte characters. If it's unicode, the null termination takes two bytes as well as each character does, and char is the wrong type, being 8 bit bytes in most compilers. If it's multibyte, then hey, all bets are off.
First some terminology:
The first sample is called heap allocation.
The second sample is called stack allocation.
The general rule is: allocate on the stack, unless:
The required size of the array is unknown at compile time.
The required size exceeds 10% of the total stack size. The default stack size on Windows and Linux is usually 1 or 2 MB. So your local array should not exceed 100,000 bytes.
You tagged your question with both C++ and C, but the second solution is not allowed in C++. Variable length arrays are only allowed in C(99).
If you were to assume 'len' is a constant, both will work.
malloc() (and C++'s 'new') allocate the memory on the heap, which means you have to free() (or if you allocated with 'new', 'delete') the buffer, or the memory will never be reclaimed (leak).
The latter allocates the array on the stack, and will be gone when it goes out of scope. This means that you can't return pointers to the buffer outside the scope it's allocated in.
The former is useful when you want to pass the block of memory around (but in C++, it's best managed with an RAII class, not manually), while the latter is best for small, fixed-size arrays that only need to exist in one scope.
Lastly, you can mark the otherwise stack-allocated array with 'static' to take it off the stack and into a global data section:
static char output[(len * 2) + 1];
This enables you to return pointers to the buffer outside of its scope, however, all calls to such a function will refer to the same piece of global data, so don't use it if you need a unique block of memory every time.
Lastly, don't use malloc in C++ unless you have a really good reason (e.g., realloc). Use 'new' instead, and the accompanying 'delete'.
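As a final sketch of that advice (the function names here are mine), this is roughly what the heap version from the question looks like with new/delete and, better still, with an RAII container (std::vector), assuming len is only known at run time:

#include <cstddef>
#include <vector>

void with_new_delete(std::size_t len) {
    char* output = new char[(len * 2) + 1];    // heap allocation, C++ style
    // ... use output ...
    delete[] output;                           // you must remember to release it
}

void with_raii(std::size_t len) {
    std::vector<char> output((len * 2) + 1);   // heap allocation managed for you
    // ... use output.data() or output[i] ...
}   // memory released automatically here, even if an exception is thrown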