Why can't we declare an array, say of int data type, of any size within the memory limit? - c++

int A[10000000]; //This gives a segmentation fault
int *A = (int*)malloc(10000000*sizeof(int)); // goes through without any seg fault.
Now my question is, just out of curiosity: if we are ultimately able to allocate that much space, for example BSTs and linked lists created using the pointer approach in C have no such memory limit (unless the total size exceeds the size of RAM for our machine), as in the second statement above of declaring a pointer type, then why can't we have an array declared of a larger size (up until it reaches the memory limit)? Is this because the space allocated for a statically sized array must be contiguous? But then where do we get the guarantee that in the next 1000000 words of RAM no other piece of code is running?
PS: I may be wrong in some of the statements I made; please correct me in that case.

Firstly, in a typical modern OS with virtual memory (Linux, Windows etc.) the amount of RAM makes no difference whatsoever. Your program is working with virtual memory, not with RAM. RAM is just a cache for virtual memory access. The absolute limiting factor for maximum array size is not RAM, it is the size of the available address space. Address space is the resource you have to worry about in OSes with virtual memory. In 32-bit OSes you have 4 gigabytes of address space, part of which is taken up for various household needs and the rest is available to you. In 64-bit OSes you theoretically have 16 exabytes of address space (less than that in practical implementations, since CPUs usually use less than 64 bits to represent the address), which can be perceived as practically unlimited.
Secondly, the amount of available address space in a typical C/C++ implementation depends on the memory type. There's static memory, there's automatic memory, there's dynamic memory. The address space limits for each memory type are pre-set in advance by the compiler. Which raises the question: where are you declaring your large array? Which memory type? Automatic? Static? You provided no information, but this is absolutely necessary. If you are attempting to declare it as a local variable (automatic memory), then no wonder it doesn't work, since automatic memory (aka "stack memory") has very limited address space assigned to it. Your array simply does not fit. Meanwhile, malloc allocates dynamic memory, which normally has the largest amount of address space available.
Thirdly, many compilers provide you with options that control the initial distribution of address space between different kinds of memory. You can request a much larger stack size for your program by manipulating such options. Quite possibly you can request a stack so large that your local array will fit in it without any problems. But in practice, for obvious reasons, it makes very little sense to declare huge arrays as local variables.
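To make the three memory types concrete, here is a minimal sketch (the size is the one from the question; the commented-out line is the variant that would overflow a typical stack):
#include <cstdlib>

static int big_static[10000000];          // static memory: reserved in the executable's data/bss segment

int main()
{
    // int big_auto[10000000];            // automatic memory: ~40 MB, far beyond a typical stack
    int *big_dynamic = (int*)std::malloc(10000000 * sizeof(int)); // dynamic memory: the heap
    if (big_dynamic == nullptr) return 1; // malloc signals failure instead of crashing
    std::free(big_dynamic);
    return 0;
}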

Assuming local variables, this is because on modern implementations automatic variables are allocated on the stack, which is very limited in space. The table below gives some of the common default stack sizes:
platform        default size
==============  =============
SunOS/Solaris   8172K bytes
Linux           8172K bytes
Windows         1024K bytes
cygwin          2048K bytes
The stack size can be changed, however; for example, on Linux one way to do it from the shell before running your process is:
ulimit -s 32768 # sets the stack size to 32M bytes
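On POSIX systems you can also query the current limit from inside the program; a small sketch using getrlimit:
#include <cstdio>
#include <sys/resource.h>

int main()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0)   // query the current (soft) stack limit
        printf("stack soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    return 0;
}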
malloc, on the other hand, allocates from the heap on modern implementations, which is limited only by the memory available to the process; in many cases you can even allocate more than is available, due to overcommit.

I THINK you're missing the difference between total memory and your program's memory space. Your program runs in an environment created by your operating system, which grants the program a specific memory range, and the program has to deal with that.
The catch: Your compiler can't 100% know the size of this range.
That means your compiler will build successfully, and the program will REQUEST that much room in memory when the time comes to make the call to malloc (or to move the stack pointer when the function is called). When the function is called (creating a stack frame), you'll get a segmentation fault caused by the stack overflow. When malloc is called, you won't get a segfault unless you try USING the memory. (If you look at the manpage for malloc() you'll see it returns NULL when there's not enough memory.)
To explain the two failures: your program is granted two memory spaces, the stack and the heap. Memory allocated using malloc() is requested via a system call and is created on the heap of your program. The system dynamically accepts or rejects the request, returning either the start address or NULL depending on success or failure. The stack is used when you call a function: room for all the local variables is made on the stack by program instructions. Calling a function can't just FAIL, as that would break program flow completely, so the system says "you're now overstepping" and segfaults, stopping execution.
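That difference is easy to see in code: a failed malloc can be detected and handled, while a blown stack cannot. A minimal sketch:
#include <cstdio>
#include <cstdlib>

int main()
{
    int *p = (int*)malloc(10000000 * sizeof(int));
    if (p == NULL) {        // a heap failure is reported politely, via NULL
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    p[0] = 42;              // physical pages are only committed once you USE the memory
    free(p);
    return 0;
}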

delete[] does not free memory of array returned by function

I am observing the following behavior in my test program:
I malloc() 1 MB and then free() it after sleep(10). I do this five times, observing memory consumption in top while the program is running.
Once free()-d, I expect the program's virtual memory (VIRT) consumption to drop by 1 MB. But it actually doesn't; it stays stable. What is the explanation for this behavior? Does malloc() keep some memory in reserve when allocating?
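A minimal sketch of the test being described (watching VIRT in top while it runs):
#include <cstdlib>
#include <cstring>
#include <unistd.h>

int main()
{
    for (int i = 0; i < 5; ++i) {
        char *p = (char*)malloc(1024 * 1024);   // 1 MB
        if (p == NULL) return 1;
        memset(p, 0, 1024 * 1024);              // touch it so it actually shows up in top
        sleep(10);
        free(p);                                // VIRT will typically NOT drop here
        sleep(10);
    }
    return 0;
}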
Once free()-d, I expect the program's virtual memory (VIRT) consumption to drop by 1 MB.
Well, this is not guaranteed by the C standard. It only says that once you free() the memory, you should not access it any more.
Whether the memory block is actually returned to the available memory pool or kept aside for future allocations is decided by the memory manager.
The C standard doesn't require the implementer of malloc and free to return the memory to the OS directly, so different C library implementations will behave differently. Some of them might give it back directly and some might not. In fact, the same implementation will behave differently depending on the allocation sizes and patterns.
This behavior, of course, is for good reasons:
It is not always possible. OS-level memory allocations are usually done in pages (4KB, 4MB, or ... sizes at once). And if a small part of a page is still being used after another part is freed, the page cannot be given back to the operating system until that part is also freed.
Efficiency. It is very likely that an application will ask for memory again, so why give it back to the OS only to ask for it again soon after? (Of course, there is probably a limit on the amount of memory kept.)
In most cases, you are not accountable for the memory you free if the implementation decided to keep it (assuming it is a good implementation). Sooner or later it will be reallocated or returned to the OS. Hence, optimizing for memory usage should be based on the amount you have malloc-ed and haven't free-d. The case where you do have to worry about this is when your allocation patterns/sizes start causing memory fragmentation, which is a very big topic of its own.
If you are, however, on an embedded system where the amount of memory available is limited and you need more control over when/how memory is allocated and freed, then you need to ask for memory pages from the OS directly and manage them manually.
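At that level of control you talk to the kernel directly; a sketch of the POSIX primitives involved (error handling abbreviated):
#include <cstddef>
#include <sys/mman.h>

int main()
{
    size_t len = 16 * 4096;                    // request whole pages from the kernel
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;
    // ... manage this region yourself ...
    munmap(p, len);                            // pages go back to the OS immediately
    return 0;
}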
Edit: I did not explain why you are not accountable for memory you free.
The reason is that, on a modern OS, allocated memory is virtual. Meaning that if you allocate 512MB on a 32-bit system or 10TB on a 64-bit system, as long as you don't read or write to that memory, no physical space will be reserved for it. Actually, physical memory will be reserved only for the pages you touch within that big block, not for the entire block. And after "a while of not using that memory", its contents will be copied to disk and the underlying physical memory will be used for something else.
This is very dependent on the actual malloc implementation in use.
Under Linux, there is a threshold (MMAP_THRESHOLD) to decide where the memory for a given malloc() request comes from.
If the requested amount is below or equal to MMAP_THRESHOLD, the request is satisfied by taking memory from the so-called "free list", if any memory blocks have already been free()d. Otherwise, the "break line" of the program (i.e. the end of the data segment) is increased, and the memory made available to the program by this process is used for the request.
On free(), the freed memory block is added to the free list. If there is enough free memory at the very end of the data segment, the break line (mentioned above) is moved back again to shrink the data segment, returning the excess memory to the OS.
If the requested amount exceeds MMAP_THRESHOLD, a separate memory block is requested from the OS via mmap(), and it is returned to the OS again during free().
See also https://linux.die.net/man/3/malloc for details.
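With glibc the threshold can even be tuned at runtime via mallopt; a glibc-specific sketch:
#include <malloc.h>   // glibc-specific
#include <cstdlib>

int main()
{
    mallopt(M_MMAP_THRESHOLD, 64 * 1024); // allocations above 64 KB will now use mmap()
    void *p = malloc(1024 * 1024);        // satisfied by a separate mmap()d block...
    free(p);                              // ...and returned to the OS right away
    return 0;
}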

De-initializing a region of memory

In the past few days I have learned about the issue of memory overcommitment (when memory overcommit is activated, which is usually the default), which basically means that:
void* p = malloc(100);
the operating system gives you 100 contiguous (virtual) addresses taken from the (virtual) address space of your process, whose total range is OS-defined. Since that memory region has not been initialized yet, it doesn't count as occupied storage from a system-wide point of view, so it's a pure abstraction besides consuming your virtual addresses.
memset(p, 0, 5);
That uses the first 5 bytes, so from the point of view of the OS your process now occupies 5 extra bytes, and the system has 5 bytes less of free storage. You still have 95 bytes of uninitialized storage.
The system only crashes or starts killing processes when the combined occupied (initialized) storage of every process exceeds what the OS can hold.
If my understanding is right in this regard, is there a way to "de-"initialize a region of memory when you are done with it, in order to increase the system-wide free space, without losing the address region requested by malloc or aligned_malloc (so that you don't increase fragmentation over time)?
The purpose of this question is more theoretical than practical and not about actually "freeing memory", but about freeing memory while conserving already assigned virtual addresses.
Source about the difference between requesting virtual addresses and occupying storage: https://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6
PS: An answer for Linux is enough to satisfy my curiosity.
No, there is no way.
On most systems, as soon as you allocate memory, it counts towards RAM or swap.
As your link shows, on Linux, you may need to access the memory once so that the memory actually gets allocated. But as soon as you do, the system must keep that memory available somewhere, in case you access it later.
The way to tell the system you are done with the memory is to actually free it.
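You can watch the accounting yourself; a sketch (Linux, 64-bit build, assuming overcommit is enabled so that the big malloc succeeds):
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <unistd.h>

int main()
{
    size_t big = (size_t)8 << 30;      // 8 GiB of virtual address space
    char *p = (char*)malloc(big);      // typically succeeds even with far less RAM
    if (p == NULL) return 1;
    printf("allocated: VIRT jumps, RES stays small\n");
    memset(p, 1, (size_t)1 << 30);     // touch 1 GiB: only now does RES grow
    sleep(30);                         // observe in top
    free(p);
    return 0;
}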

A problem with the initialization of a vector

I can initialize the vector with 10^8, but I can't initialize it with 10^9. Why?
vector<int> bucket;
bucket.resize(100000000);  // √ works
bucket.resize(1000000000); // × fails
That's because the resize function requests memory from the heap. As you can figure, the size will be 4000000000 bytes in your second resize operation (with 4-byte ints), which is larger than the contiguous space your system can allocate (maybe your computer couldn't find one continuous piece of space for you), so the call raises an exception and fails.
The maximum memory you can ask for depends on many factors, such as:
the hardware limitation of physical memory;
whether the OS is 32-bit or 64-bit;
the memory left for user space: the operating system must meet the kernel's needs first, and generally speaking the Windows kernel needs more memory than Linux or Unix;
…
In a word, it is hard to know the exact amount of memory you can use, because it is a dynamic value. But you can make a rough estimate by attempting allocations with the new operator.
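The failure can also be caught rather than letting it terminate the program; a sketch (on machines with enough address space and memory the resize may simply succeed):
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> bucket;
    try {
        bucket.resize(1000000000);          // ~4 GB of contiguous storage
    } catch (const std::exception& e) {     // std::bad_alloc (or std::length_error)
        std::cout << "resize failed: " << e.what() << '\n';
    }
    return 0;
}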
C++ vectors allocate memory in a contiguous block and it is likely that the operating system cannot find such a block when the block size gets too large.
Would the error message you are getting indicate that you are running out of memory?
The point is: Even if you think that you have enough memory left on your system, if your program's address space can not accommodate the large block in one chunk then you cannot construct the large vector (the maximum address space size may differ for 32-bit and 64-bit programs).

Why can't we allocate dynamic memory on the stack?

Allocating stuff on the stack is awesome because then we have RAII and don't have to worry about memory leaks and such. However, sometimes we must allocate on the heap:
If the data is really big (recommended) - because the stack is small.
If the size of the data to be allocated is only known at runtime (dynamic allocation).
Two questions:
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable? I.e. Thing t;.
Edit: I know some compilers support Variable Length Arrays, which are dynamically allocated stack memory. But that's really an exception to the general rule. I'm interested in understanding the fundamental reasons why, in general, we can't allocate dynamic memory on the stack: the technical reasons for it and the rationale behind it.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
It's more complicated to achieve this. The size of each stack frame is burned-in to your compiled program as a consequence of the sort of instructions the finished executable needs to contain in order to work. The layout and whatnot of your function-local variables, for example, is literally hard-coded into your program through the register and memory addresses it describes in its low-level assembly code: "variables" don't actually exist in the executable. To let the quantity and size of these "variables" change between compilation runs greatly complicates this process, though it's not completely impossible (as you've discovered, with non-standard variable-length arrays).
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable
This is just a consequence of the syntax. C++'s "normal" variables happen to be those with automatic or static storage duration. The designers of the language could technically have made it so that you could write something like Thing t = new Thing and just use t all day, but they did not; again, this would have been more difficult to implement. How would you distinguish between the different types of objects then? Remember, your compiled executable has to remember to auto-destruct one kind and not the other.
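For contrast, here is what the two kinds of object look like in the actual syntax (Thing is just a placeholder type):
struct Thing { /* ... */ };

int main()
{
    Thing t;              // automatic: destroyed for you at the end of the scope
    Thing* p = new Thing; // dynamic: referred to only through the pointer...
    delete p;             // ...and destroyed only when you say so
    return 0;
}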
I'd love to go into the details of precisely why and why not these things are difficult, as I believe that's what you're after here. Unfortunately, my knowledge of assembly is too limited.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
Technically, this is possible. But it is not sanctioned by the C++ standard. Variable length arrays (VLAs) allow you to create dynamically sized constructs in stack memory. Most compilers support this as a compiler extension.
example:
int array[n]; // where n is only known at run-time
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable? I.e. Thing t;.
We can. Whether you do it or not depends on implementation details of a particular task at hand.
example:
int i;
int *ptr = &i;
We can allocate variable-length space dynamically in stack memory using the function _alloca. This function allocates memory from the program stack. It simply takes the number of bytes to be allocated and returns a void* to the allocated space, just as a malloc call does. The allocated memory is freed automatically on function exit.
So it need not be freed explicitly. One has to keep the allocation size in mind here, as a stack overflow exception may occur. Stack overflow exception handling can be used for such calls; in case of a stack overflow exception, one can use _resetstkoflw() to restore the stack.
So our new code with _alloca would be :
int NewFunctionA()
{
    char* pszLineBuffer = (char*) _alloca(1024*sizeof(char));
    // ... program logic ...
    // no need to free pszLineBuffer
    return 1;
}
Every variable that has a name becomes, after compilation, a dereferenced pointer whose address value is computed by adding (or, depending on the platform, subtracting) an "offset value" to a stack pointer (a register that contains the address the stack has currently reached; the current function's return address is usually stored there as well).
int i,j,k;
becomes
(SP-12) ;i
(SP-8) ;j
(SP-4) ;k
To let this "sum" to be efficient, the offsets have to be constant, so that they can be encode directly in the instruction op-code:
k=i+j;
becomes
MOV (SP-12),A; i-->>A
ADD A,(SP-8) ; A+=j
MOV A,(SP-4) ; A-->>k
You see here how 4,8 and 12 are now "code", not "data".
That implies that a variable that comes after another requires the other to retain a fixed, compile-time-defined size.
Dynamically declared arrays can be an exception, but they can only be the last variable of a function. Otherwise, all the variables that follow would have offsets that must be adjusted at run-time after that array's allocation.
This creates the complication that dereferencing the addresses requires arithmetic (not just a plain offset) or the capability to modify the op-code as variables are declared (self-modifying code).
Both solutions become sub-optimal in terms of performance, since either can break the locality of addressing or add more calculation for each variable access.
Why can't we allocate dynamic memory (i.e. memory of size that is only known at runtime) on the stack?
You can with Microsoft compilers using _alloca() or _malloca(). For gcc, it's alloca()
I'm not sure it's part of the C / C++ standards, but variations of alloca() are included with many compilers. If you need aligned allocation, such as "n" bytes of memory starting on an "m" byte boundary (where m is a power of 2), you can allocate n+m bytes of memory, add m to the pointer and mask off the lower bits. Example to allocate hex 1000 bytes of memory on a hex 100 boundary. You don't need to preserve the value returned by _alloca() since it's stack memory and automatically freed when the function exits:
char *p;
p = (char *)_alloca(0x1000 + 0x100);
p = (char *)(((size_t)p + (size_t)0x100) & ~(size_t)0xff); // round up to a 0x100 boundary
The most important reason is that heap memory can be deallocated in any order, but the stack requires memory to be deallocated in a fixed order, i.e. LIFO order. Hence it would be difficult to implement this in practice.
Virtual memory is a virtualization of memory, meaning that it behaves as the resource it is virtualizing (memory). In a system, each process has a different virtual memory space:
32-bit programs: 2^32 bytes (4 Gigabytes)
64-bit programs: 2^64 bytes (16 Exabytes)
Because virtual space is so big, only some regions of that virtual space are usable (meaning that only some regions can be read/written just as if it were real memory). Virtual memory regions are initialized and made usable through mapping. Virtual memory does not consume resources and can be considered unlimited (for 64-bit programs), BUT usable (mapped) virtual memory is limited and uses up resources.
For every process, some mappings are created by the kernel and others by user code. For example, before the code even starts executing, the kernel maps specific regions of the virtual memory space of a process for the code instructions, global variables, shared libraries, the stack space, etc. The user code uses dynamic allocation (allocation wrappers such as malloc and free) or garbage collectors (automatic allocation) to manage the virtual memory mapping at application level (for example, if there is not enough free usable virtual memory available when calling malloc, new virtual memory is automatically mapped).
You should differentiate between mapped virtual memory (the total size of the stack, the total current size of the heap...) and allocated virtual memory (the part of the heap that malloc explicitly told the program that can be used)
Regarding this, I reinterpret your first question as:
Why can't we save dynamic data (i.e. data whose size is only known at runtime) on the stack?
First, as others have said, it is possible: Variable Length Arrays are just that (at least in C; I figure also in C++ as an extension). However, they have some technical drawbacks, and maybe that's the reason why they are an exception:
The size of the stack used by a function becomes unknown at compile time; this adds complexity to stack management, additional registers (variables) must be used, and it may impede some compiler optimizations.
The stack is mapped at the beginning of the process and it has a fixed size. That size would have to be increased greatly if variable-size data were going to be placed there by default. Programs that do not make extensive use of the stack would waste usable virtual memory.
Additionally, data saved on the stack must be saved and deleted in Last-In-First-Out order, which is perfect for local variables within functions but unsuitable if we need a more flexible approach.
Why can we only refer to memory on the heap through pointers, while memory on the stack can be referred to via a normal variable?
As this answer explains, we can.
Read a bit about Turing Machines to understand why things are the way they are. Everything was built around them as the starting point.
https://en.wikipedia.org/wiki/Turing_machine
Anything outside of this is technically an abomination and a hack.

How does a computer 'know' what memory is allocated?

When memory is allocated in a computer, how does it know which bytes are already occupied and can't be overwritten?
So if these are some bytes of memory that aren't being used:
[0|0|0|0]
How does the computer know whether they are or not? They could just be an integer that equals zero. Or it could be empty memory. How does it know?
That depends on the way the allocation is performed, but it generally involves manipulation of data belonging to the allocation mechanism.
When you allocate some variable in a function, the allocation is performed by decrementing the stack pointer. Via the stack pointer, your program knows that anything below the stack pointer is not allocated to the stack, while anything above the stack pointer is allocated.
When you allocate something via malloc() etc. on the heap, things are similar, but more complicated: all these allocators have some internal data structures which they never expose to the calling application, but which allow them to select which memory addresses to return on an allocation request. Some malloc() implementations, for instance, use a number of memory pools for small objects of fixed size, and maintain linked lists of free objects for each fixed size that they track. That way, they can quickly pop one memory region off that list, only doing more expensive computations when they run out of regions to satisfy a certain request size.
In any case, each of these allocators has to request memory from the system kernel from time to time. This mechanism always works on complete memory pages (usually 4 KiB), via the syscalls brk() and mmap(). Again, the kernel keeps track of which pages are visible in which processes, and at which addresses they are mapped, so there is additional memory allocated inside the kernel for this.
These mappings are made available to the processor via the page tables, which it uses to resolve virtual memory addresses to physical addresses. So here, finally, some hardware is involved in the process, but that is really far, far down in the guts of the mechanics, much below anything that a userspace process is ever able to see. Still, even the page tables are managed by the software of the kernel, not by the hardware; the hardware only interprets what the software writes into the page tables.
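A toy version of such a fixed-size free list, just to illustrate the idea (FixedPool and FreeNode are made-up names; real allocators are far more involved):
#include <cstdlib>

// Each free block stores the pointer to the next free block inside itself.
struct FreeNode { FreeNode* next; };

struct FixedPool {
    FreeNode* head = nullptr;
    size_t block_size;                      // must be >= sizeof(FreeNode)

    explicit FixedPool(size_t sz) : block_size(sz) {}

    void* alloc() {
        if (head) {                         // fast path: pop a region off the free list
            void* p = head;
            head = head->next;
            return p;
        }
        return std::malloc(block_size);     // slow path: go to the system
    }

    void dealloc(void* p) {                 // push the region back onto the free list
        FreeNode* n = (FreeNode*)p;
        n->next = head;
        head = n;
    }
};

int main()
{
    FixedPool pool(64);
    void* a = pool.alloc();
    pool.dealloc(a);                        // onto the free list...
    void* b = pool.alloc();                 // ...and straight back out (b == a)
    pool.dealloc(b);
    return 0;
}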
First of all, I have the impression that you believe there is some unoccupied memory that doesn't hold any value. That's wrong. You can imagine memory as a very large array where each box contains a value, whether or not someone has put something in it. If memory was never written, it contains a random value.
Now to answer your question: it's not the computer (meaning the hardware) but the operating system. It holds, somewhere in its memory, tables recording which parts of memory are used. Also, any byte of memory can be overwritten.
In general, you cannot tell by looking at content of memory at some location whether that portion of memory is used or not. Memory value '0' does not mean the memory is not used.
To tell what portions of memory are used you need some structure to tell you this. For example, you can divide memory into chunks and keep track of which chunks are used and which are not.
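For example, a tiny bitmap that tracks which fixed-size chunks of a region are in use (a sketch of the bookkeeping idea with hypothetical helpers, not a real allocator):
#include <cstdint>

// One bit per chunk: 1 = used, 0 = free. The memory itself is never consulted;
// only this side table records what is occupied.
uint8_t used[1024 / 8] = {0};                 // tracks 1024 chunks

bool is_used(int c)   { return used[c / 8] & (1 << (c % 8)); }
void mark_used(int c) { used[c / 8] |= (uint8_t)(1 << (c % 8)); }
void mark_free(int c) { used[c / 8] &= (uint8_t)~(1 << (c % 8)); }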
Memory is organized in blocks, and each block is either occupied or not. On the heap, there are very complex data structures which organize this. But the answer to your question is too broad.