I was reading [1] about stack pointers and the need to know both ebp (the start of the stack frame for the function) and esp (the end). The article said that you need to know both because the stack can grow, but I don't see how this is possible in C/C++. (I'm not talking about another function call, because to my mind that would make the stack grow, do some stuff, then be popped back to the state before the call.)
I have done a little research and only saw people saying that new allocates on the heap. But the pointer will be a local variable, right? And its size is known at compile time and reserved on the stack when the function is called.
I started to think that maybe with loops you could have an uncontrolled number of local variables:
int a;
for (int i = 0; i < n; ++i)
int b = i + 3;
but no, this doesn't allocate b n times; only one int is reserved, just as it is for a.
So... any example ?
[1]: http://en.wikibooks.org/wiki/X86_Disassembly/Functions_and_Stack_Frames
You can allocate memory on the stack with the alloca function (declared in <alloca.h> on many systems; it is not part of standard C). I don't recommend using this function in production code: it's too easy to corrupt your stack or get a stack overflow.
The use of EBP is more for convenience. It is possible to just use ESP. The problem with that is, as parameters to functions are pushed onto the stack, the address relative to ESP of all the variables and parameters changes.
By setting EBP to a fixed, known position on the stack, usually between the function parameters and the local variables, the address of all these elements remains constant relative to EBP throughout the lifetime of the function. It can also help with debugging, as the value of ESP at the end of the function should equal the value of EBP.
The only way I know of to grow the stack by an amount that is not known at compile time is to use alloca repeatedly.
A pointer is indeed allocated on the stack. And its size is usually 4 or 8 bytes on 32- and 64-bit architectures respectively. So you know the size of a pointer statically, at compile time, and you can keep pointers on the stack if you choose to do so.
This pointer can point to the free store, and you can allocate memory for it dynamically, without having to know the size beforehand. Moreover, it is usually a good idea to keep stack frames small, and platforms even enforce this with (adjustable) limits on the total stack size: MSVC defaults to 1 MB, if I recall correctly.
So no, there is no way you can create a stack frame whose size is unknown at compile time. The stack frame of the code you posted will have room for 3 integers (a, b, i). (And possibly some padding, shadow space, etc., which is not relevant here.) (It is technically possible to extend the size of a stack frame at run time, but you almost never want to do that.)
Related
As far as I understand, stack variables are stored using an absolute offset to the stack frame pointer.
But how are those variables addressed later?
Consider the following code:
#include <iostream>
int main()
{
    int a = 0;
    int b = 1;
    int c = 2;
    std::cout << b << std::endl;
}
How does the compiler know where to find b? Does it store its offset to the stack frame pointer? And if so, where is this information stored? And does that mean that int needs more than 4 bytes to be stored?
The location (relative to the stack pointer) of stack variables is a compile-time constant.
The compiler always knows how many things it's pushed to the stack since the beginning of the function and therefore the relative position of any one of them within the stack frame. (Unless you use alloca or VLAs [1].)
On x86 this is usually achieved by addressing relative to the ebp or esp registers, which are typically used to represent the "beginning" and "end" of the stack frame. The offsets themselves don't need to be stored anywhere as they are built into the instruction as part of the addressing scheme.
Note that local variables are not always stored on the stack.
The compiler is free to put them wherever it wants, so long as it behaves as if it were allocated on the stack.
In particular, small objects like integers may simply stay in a register for the full duration of their lifespans (or until the compiler is forced to spill them onto the stack), constants may be stored in read-only memory, or any other optimization that the compiler deems fit.
Footnote 1: In functions that use alloca or a VLA, the compiler will use a separate register (like RBP on x86-64) as a "frame pointer", even in an optimized build, and address locals relative to the frame pointer rather than the stack pointer. The number of named C variables is known at compile time, so they can go at the top of the stack frame, where their offset from the frame pointer is constant. Multiple VLAs can simply work as pointers to space allocated as if by alloca. (That's one typical implementation strategy.)
Previously I had seen the assembly of many functions in C++. In gcc, all of them start with these instructions:
push rbp
mov rbp, rsp
sub rsp, <X> ; <X> is size of frame
I know that these instructions store the frame pointer of the previous function and then set up a frame for the current function. But here, the assembly is neither asking to map memory (like malloc does) nor checking whether the memory pointed to by rbp is allocated to the process.
So it assumes that the startup code has mapped enough memory for the entire depth of the call stack. So exactly how much memory is allocated for the call stack? How can the startup code know the maximum depth of the call stack?
It also means that I can read an array far out of bounds: although that memory is not in the current frame, it is mapped to the process. So I wrote this code:
#include <stdio.h>

int main() {
    int arr[3] = {};
    printf("%d", arr[900]);
}
This exits with SIGSEGV when the index is 900, but surprisingly not when it is 901. Similarly, it exits with SIGSEGV for some seemingly random indices and not for others. This behavior was observed when compiled with gcc-x86-64-11.2 in Compiler Explorer.
How can the startup code know the maximum depth of the call stack?
It doesn't.
In the most common implementations, the size of the stack is constant.
If the program exceeds the constant-size stack, that is called a stack overflow. This is why you must avoid creating large objects (typically, but not necessarily, arrays) in automatic storage, and why you must avoid recursion whose depth grows linearly with the input (such as recursive linked-list algorithms).
So exactly how much memory is allocated for the call stack?
On most desktop/server systems it's configurable, and defaults to one to a few megabytes. It can be much less on embedded systems.
This exits with SIGSEGV when the index is 900, but surprisingly not when it is 901.
In both cases, the behaviour of the program is undefined.
Is it possible to know the allocated stack size?
Yes. You can read the documentation of the target system. If you intend to write a portable program, then you must assume the minimum across all target systems. For desktop/server, the 1 megabyte I mentioned is a reasonable assumption.
There is no standard way to acquire the size within C++.
I am trying to better understand items on a stack and how they are addressed. The article I found here seems to indicate that when a MIPS stack is initialized, a fixed amount of memory is allocated and the stack grows down to the stack limit, which would appear to be at smaller addresses. I would assume that, based on this logic, a stack overflow would occur when address 0x0000 was reached?
I realize MIPS is big-endian, but does that change how the stack grows? I wrote what I believed would be a quick way to observe this on an x86_64 machine, but the stack appears to grow up, as I originally assumed it did.
#include <iostream>
#include <vector>
int main() {
    std::vector<int*> v;
    for( int i = 0; i < 10; i++ ) {
        v.push_back(new int);
        std::cout << v.back() << std::endl;
    }
}
I'm also confused by the fact that the memory addresses do not all appear to be contiguous, which makes me think I did something stupid. Could somebody please clarify?
The stack on x86 machines also grows downwards. Endianness is unrelated to the direction in which the stack grows.
The stack of a machine has absolutely nothing to do with std::vector<>. Also, new int allocates heap memory, so it tells you absolutely nothing about the stack.
In order to see in which direction the stack grows, you need to do something like this:
#include <stdio.h>

void recursive( int n )
{
    if( n == 0 )
        return;

    int a;
    printf( "%p\n", (void *)&a );
    recursive( n - 1 );
}

int main( void )
{
    recursive( 5 );
}
(Note that if your compiler is smart enough to optimize tail recursion, you will need to tell it not to, otherwise the observations will be all wrong.)
Essentially there are 3 types of memory you use in programming: static, dynamic/heap, and stack.
Static memory is pre-allocated by the compiler and consists of the constants and variables declared statically in your program.
Heap is the memory which you can freely allocate and release.
Stack is the memory which gets allocated for all local variables declared in a function. This is important because every time you call the function, new memory for its variables is allocated, so every call to the function is assured its own unique copy of the variables. And every time you return from the function, the memory gets freed.
It absolutely does not matter how the stack is managed, as long as it follows the above rules. It is convenient, however, to have program memory allocated in the lower address space and grow up, and the stack start from the top of the memory space and grow down. Most systems implement this scheme.
In general there is a stack pointer register/variable which points to the current stack address. When a function gets called, it decreases this address by the number of bytes it needs for its variables. When it calls the next function, the new one starts with the pointer already decreased by the caller. When the function returns, it restores the pointer it started from.
There could be different schemes, but as far as I know, MIPS and x86 follow this one.
And essentially there is only one virtual memory space in the program. It is up to the operating system and/or compiler how to use it. The compiler will split the memory into logical regions for its own use and handle them, hopefully, according to the calling conventions defined in the platform documents.
So, in our example, v and i are allocated on the function's stack. cout is static. Every new int allocates space on the heap. v is not a simple variable but an object containing the fields it needs to manage its storage, and it needs space for all these internals. So every push_back modifies those fields to point to the allocated ints in some way. push_back() and back() are function calls and get their own stack frames for internal variables, so they do not interfere with the calling function's frame.
When I create a new variable in a C++ program, eg a char:
char c = 'a';
how does C++ then have access to this variable in memory? I would imagine that it would need to store the memory location of the variable, but then that would require a pointer variable, and this pointer would again need to be accessed.
See the docs:
When a variable is declared, the memory needed to store its value is
assigned a specific location in memory (its memory address).
Generally, C++ programs do not actively decide the exact memory
addresses where its variables are stored. Fortunately, that task is
left to the environment where the program is run - generally, an
operating system that decides the particular memory locations on
runtime. However, it may be useful for a program to be able to obtain
the address of a variable during runtime in order to access data cells
that are at a certain position relative to it.
You can also refer to this article on Variables and Memory
The Stack
The stack is where local variables and function parameters reside. It
is called a stack because it follows the last-in, first-out principle.
As data is added or pushed to the stack, it grows, and when data is
removed or popped it shrinks. In reality, memory addresses are not
physically moved around every time data is pushed or popped from the
stack, instead the stack pointer, which as the name implies points to
the memory address at the top of the stack, moves up and down.
Everything below this address is considered to be on the stack and
usable, whereas everything above it is off the stack, and invalid.
This is all accomplished automatically by the operating system, and as
a result it is sometimes also called automatic memory. On the
extremely rare occasions that one needed to explicitly request this
storage class, the old C++ keyword auto could be used (note that since
C++11, auto has been repurposed for type deduction, so this usage is
obsolete).
Normally, one declares variables on the stack like this:
void func () {
int i; float x[100];
...
}
Variables that are declared on the stack are only valid within the
scope of their declaration. That means when the function func() listed
above returns, i and x will no longer be accessible or valid.
There is another limitation to variables that are placed on the stack:
the operating system only allocates a certain amount of space to the
stack. As each part of a program that is being executed comes into
scope, the operating system allocates the appropriate amount of memory
that is required to hold all the local variables on the stack. If this
is greater than the amount of memory that the OS has allowed for the
total size of the stack, then the program will crash. While the
maximum size of the stack can sometimes be changed by compile time
parameters, it is usually fairly small, and nowhere near the total
amount of RAM available on a machine.
Assuming this is a local variable, it is allocated on the stack, i.e. in RAM. The compiler keeps track of the variable's offset within the stack frame. In the basic scenario, if any computation is performed with the variable, it is moved into one of the processor's registers, the CPU performs the computation, and the result is written back to RAM. Modern compilers also keep variables in registers across whole stretches of code, and CPUs add multiple levels of caches in between, so it can get quite complex.
Please note that the name "c" is no longer mentioned in the binary (unless you have debugging symbols). The binary then only works with memory locations. E.g. it would look like this (simple addition):
a = b + c
take the value at memory offset 1 and put it in register 1
take the value at memory offset 2 and put it in register 2
sum registers 1 and 2 and store the result in register 3
copy register 3 to memory location 3
The binary doesn't know "a", "b" or "c". The compiler just said "a is at memory location 1, b at memory location 2, c at memory location 3", and the CPU blindly executes the commands the compiler generated.
C++ itself (or, the compiler) would have access to this variable in terms of the program structure, represented as a data structure. Perhaps you're asking how other parts in the program would have access to it at run time.
The answer is that it varies. It can be stored either in a register, on the stack, on the heap, or in the data/bss sections (global/static variables), depending on its context and the platform it was compiled for: If you needed to pass it around by reference (or pointer) to other functions, then it would likely be stored on the stack. If you only need it in the context of your function, it would probably be handled in a register. If it's a member variable of an object on the heap, then it's on the heap, and you reference it by an offset into the object. If it's a global/static variable, then its address is determined once the program is fully loaded into memory.
C++ eventually compiles down to machine language, and often runs within the context of an operating system, so you might want to brush up a bit on Assembly basics, or even some OS principles, to better understand what's going on under the hood.
Let's say our program starts with a stack address of 4000000.
When you call a function, depending on how much stack it uses, it will "allocate" it like this.
Let's say we have 2 ints (8 bytes):
void function()
{
    int a = 0;
    int b = 0;
}
then what's going to happen in assembly is
PUSH EBP     //Here we save the caller's EBP on the stack
MOV EBP, ESP //Here we store the current stack address (4000000) in EBP, and we restore it at the end of the function back to 4000000
SUB ESP, 8   //Here we "allocate" 8 bytes on the stack, which basically just decreases the ESP address by 8
so our ESP address was changed from
4000000
to
3999992
that's how the program knows the stack address for the first int is from 3999992 to 3999996, and the second int is from 3999996 to 4000000
Even though this pretty much has nothing to do with the compiler, it's really important to know, because once you know how the stack is "allocated", you realize how cheap it is to do things like
char my_array[20000];
since all it's doing is sub esp, 20000, which is a single assembly instruction
but if you actually use all those bytes, like memset(my_array, 0, 20000), that's a different story.
how does C++ then have access to this variable in memory?
It doesn't!
Your computer does, and it is instructed on how to do that by loading the location of the variable in memory into a register. This is all handled by assembly language. I shan't go into the details here of how such languages work (you can look it up!) but this is rather the purpose of a C++ compiler: to turn an abstract, high-level set of "instructions" into actual technical instructions that a computer can understand and execute. You could sort of say that assembly programs contain a lot of pointers, though most of them are literals rather than "variables".
I know the C standards preceding C99 (as well as C++) say that the size of an array on the stack must be known at compile time. But why is that? The array on the stack is allocated at run time. So why does the size matter at compile time? I hope someone can explain what a compiler does with the size at compile time. Thanks.
The example of such an array is:
void func()
{
    /* Here "array" is a local variable on the stack; its space is allocated
     * at run time. Why does the compiler need to know its size at compile time?
     */
    int array[10];
}
To understand why variably-sized arrays are more complicated to implement, you need to know a little about how automatic storage duration ("local") variables are usually implemented.
Local variables tend to be stored on the runtime stack. The stack is basically a large array of memory, which is sequentially allocated to local variables and with a single index pointing to the current "high water mark". This index is the stack pointer.
When a function is entered, the stack pointer is moved in one direction to allocate memory on the stack for local variables; when the function exits, the stack pointer is moved back in the other direction, to deallocate them.
This means that the actual location of local variables in memory is defined only with reference to the value of the stack pointer at function entry [1]. The code in a function must access local variables via an offset from the stack pointer. The exact offsets to be used depend upon the size of the local variables.
Now, when all the local variables have a size that is fixed at compile-time, these offsets from the stack pointer are also fixed - so they can be coded directly into the instructions that the compiler emits. For example, in this function:
void foo(void)
{
    int a;
    char b[10];
    int c;
}
a might be accessed as STACK_POINTER + 0, b might be accessed as STACK_POINTER + 4, and c might be accessed as STACK_POINTER + 14.
However, when you introduce a variably-sized array, these offsets can no longer be computed at compile-time; some of them will vary depending upon the size that the array has on this invocation of the function. This makes things significantly more complicated for compiler writers, because they must now write code that accesses STACK_POINTER + N - and since N itself varies, it must also be stored somewhere. Often this means doing two accesses - one to STACK_POINTER + <constant> to load N, then another to load or store the actual local variable of interest.
[1] In fact, "the value of the stack pointer at function entry" is such a useful value to have around that it has a name of its own - the frame pointer - and many CPUs provide a separate register dedicated to storing the frame pointer. In practice, it is usually the frame pointer from which the location of local variables is calculated, rather than the stack pointer itself.
It is not an extremely complicated thing to support, so the reason C89 doesn't allow this is not that it was impossible back then.
There are however two important reasons why it is not in C89:
The runtime code will be less efficient if the array size is not known at compile time.
Supporting this makes life harder for compiler writers.
Historically, it has been very important that C compilers should be (relatively) easy to write. Also, it should be possible to make compilers simple and small enough to run on modest computer systems (by 80s standards). Another important feature of C is that the generated code should consistently be extremely efficient, without any surprises.
I think it is unfortunate that these values no longer hold for C99.
The compiler has to generate the code that creates space for the frame on the stack, to hold the array and the other local variables. For this it needs the size of the array.
Depends on how you allocate the array.
If you create it as a local variable, and specify a length, then it matters because the compiler needs to know how much space to allocate on the stack for the elements of the array. If you don't specify a size of the array, then it doesn't know how much space to set aside for the array elements.
If you create just a pointer, then all you need is space for the pointer itself, and you can allocate the array elements dynamically at run time. But in this form of array creation, you're allocating space for the elements on the heap, not on the stack.
In C++ this becomes even more difficult to implement, because variables stored on the stack must have their destructors called in the event of an exception, or upon returning from a given function or scope. Keeping track of the exact number/size of variables to be destroyed adds additional overhead and complexity. While in C it's possible to use something like a frame pointer to make freeing of the VLA implicit, in C++ that doesn't help you, because those destructors need to be called.
Also, VLAs can cause Denial of Service security vulnerabilities. If the user is able to supply any value which is eventually used as the size for a VLA, then they could use a sufficiently large value to cause a stack overflow (and therefore failure) in your process.
Finally, C++ already has a safe and effective variable-length array (std::vector<T>), so there's little reason to implement this feature for C++ code.
Let's say you create a variably-sized array on the stack. Then the size of the stack frame needed by the function isn't known at compile time. C assumed that some runtime environments require this to be known up front; hence the limitation. C goes back to the early 1970s, and many languages at the time had a "static" look and feel (like Fortran).