Where is the memory address of a C/C++ pointer held?

If I do the following:
int i, *p = &i;
int **p2p = &p;
I take the address of i (on the stack) and assign it to p; I then take the address of p (on the stack) and assign that to p2p.
My question is, we know the value of i is kept in the memory address p and so forth but how does the operating system know where that address is? I suppose their addresses are being kept organized in the stack. Is each declared variable (identifier) treated like an offset from the current position of the stack? What about global variables, how does the operating system and/or the compiler deal with addressing these during execution? How do the OS and the compiler work to 'remember' the addresses of each identifier without using memory? Are all variables just entered (pushed) in-order into the stack and their names are replaced with their offsets? If so, what about conditional code that can change the order of declaration?

I used to be an assembly language programmer, so I know the answer for the CPUs that I used to work with. The main point is that one of the CPU's registers is used as the stack pointer, called SP (or esp on x86 CPUs these days). The compiler references the variables (i, p and p2p in your case) relative to SP. In other words, the compiler decides what the offset of each variable should be from SP, and produces machine code accordingly.

Conceptually, data can be stored in 4 different areas of memory, depending on its scope and whether it's constant or variable. I say "conceptually" because memory allocation is very platform-dependent, and the strategy can become extremely complicated in order to wring out as much efficiency as modern architectures can provide.
It's also important to realize that, with few exceptions, the OS doesn't know or care where variables reside; the CPU does. It's the CPU that processes each operation in a program, calculates addresses, and reads and writes memory. In fact, the OS itself is just a program, with its own variables, that the CPU executes.
In general, the compiler decides which type of memory (e.g. stack, heap, register) to allocate for each variable. If it chooses a register, it also decides which register to allocate. If it chooses another type of memory, it calculates the variable's offset from the beginning of that section of memory. It creates an "object" file that still references these variables as offsets from the start of their sections.
Then the linker reads each of the object files, combines and sorts their variables into the appropriate sections, and then "fixes up" the offsets. (That's the technical term. Really.)
Constant data
What is it?
Since this data never changes, it's typically stored alongside the program itself in an area of read-only memory. In an embedded system, like a microwave oven, this may be in (traditionally inexpensive) ROM instead of (more expensive) RAM. On a PC, it's a segment of RAM that's been designated as read-only by the OS, so an attempt to write to it will cause a segmentation fault and stop the program before it "illegally" changes something it shouldn't.
How is it accessed?
The compiler typically references constant data as an offset from the beginning of the constant data segment. It's the linker that knows where the segment actually resides, so it fixes the starting address of the segment.
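As a small illustration (a sketch, not part of the original answer; exact addresses and segment names vary by platform and toolchain), a string literal is a familiar piece of constant data that typically lands in this read-only area, and the segmentation fault described above is easy to provoke:

#include <cstdio>

// A string literal normally ends up in a read-only segment of the executable.
const char *greeting = "hello";

int main() {
    std::printf("the literal lives at %p\n", (const void *)greeting);

    // Writing to it is undefined behaviour.  On a typical PC the page is
    // mapped read-only, so uncommenting these lines usually produces a
    // segmentation fault rather than silently changing the text.
    // char *evil = const_cast<char *>(greeting);
    // *evil = 'H';

    return 0;
}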
Global and static data
What is it?
This data must be available throughout the entire life of the running program, so it must reside on a "heap" of memory that's been allocated to the program (conventionally this region is called the data segment, as distinct from the dynamic heap used by new and malloc). Since the data can change, it cannot reside in read-only memory as constant data does; it must reside in writable RAM.
How is it accessed?
The CPU accesses global and static data in the same way as constant data: it's referenced as an offset from the start of the heap, with the heap's starting address fixed by the linker.
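A quick way to see these regions from C++ (a sketch; the printed addresses depend on the platform, the linker and address-space layout randomization, so only the rough grouping is meaningful):

#include <cstdio>

int global_counter = 42;            // lives in the writable data area

int main() {
    static int static_counter = 7;  // also in the data area, despite its local scope
    int local = 0;                  // lives on the stack

    // The global and the static usually sit near each other, well away from
    // the stack-resident local.
    std::printf("global : %p\n", (void *)&global_counter);
    std::printf("static : %p\n", (void *)&static_counter);
    std::printf("local  : %p\n", (void *)&local);
    return 0;
}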
Local data
What is it?
These are variables that exist only while an enclosing function is active. They reside in RAM that is allocated dynamically and then returned to the system immediately when the function exits. Conceptually, they're allocated from a "stack" that grows as functions are called and create variables; it shrinks as each function returns. The stack also holds the "return address" for each function call: the CPU records its current location in the program and "pushes" that address onto the stack before it calls a function; then, when the function returns, it "pops" the address off the stack so it can resume from wherever it was before the function call. But again, the actual implementation depends on the architecture; the important thing is to remember that a function's local data becomes invalid, and should therefore never be referenced, after the function returns.
How is it accessed?
Local data is accessed by its offset from the beginning of the stack. The compiler knows the next available stack address when it enters a function, and ignoring some esoteric cases, it also knows how much memory it needs for local variables, so it moves the "stack pointer" to skip over that memory. It then references each local variable by calculating its address within the stack.
Registers
What are they?
A register is a small area of memory within the CPU itself. All calculations occur within registers, and register operations are very fast. The CPU contains a relatively small number of registers, so they're a limited resource.
How are they accessed?
The CPU can access registers directly, which makes register operations very quick. The compiler may choose to allocate a register to a variable as an optimization, so it won't need to wait while it fetches or writes the data to RAM. Generally, only local data is assigned to registers. For example, a loop counter may reside in a register, and the stack pointer is itself a register.
The answer to your question:
When you declare a variable on the stack, the compiler calculates its size and assigns memory for it, beginning at the next available location on the stack. Let's look at your example, making the following assumptions:
1. When the function is called, SP is the next available address in the stack, which grows downward.
2. sizeof(int) = 2 (just to make it different from the size of a pointer).
3. sizeof(int *) = sizeof(int **) = 4 (that is, all pointers are the same size).
Then: int i, *p = &i;
int **p2p = &p;
You're declaring 3 variables:
i: Addr = SP, size = 2, contents = uninitialized
p: Addr = SP-2, size = 4, contents = SP (address of i)
p2p: Addr = SP-6, size = 4, contents = SP-2 (address of p)
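If you want to observe this layout for yourself, a sketch like the following prints the addresses involved (compile without optimization; the concrete values, sizes and offsets are compiler- and ABI-dependent, not the 2- and 4-byte sizes assumed above):

#include <cstdio>

int main() {
    int i = 0, *p = &i;
    int **p2p = &p;

    // All three live in the same stack frame, so their addresses differ only
    // by small, fixed offsets chosen by the compiler.
    std::printf("&i   = %p\n", (void *)&i);
    std::printf("&p   = %p\n", (void *)&p);
    std::printf("&p2p = %p\n", (void *)&p2p);

    std::printf("p    = %p (the address of i)\n", (void *)p);
    std::printf("p2p  = %p (the address of p)\n", (void *)p2p);
    return 0;
}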

The operating system is not concerned about the addresses your programs use. Whenever a system call is issued that needs to use a buffer within your address space, your program provides the address of the buffer.
Your compiler sets up a stack frame for each of your functions:
push ebp
mov ebp,esp
Then, any function parameters or local variables can be addressed relative to the value of the EBP register, which is then the base address of that stack frame. The compiler keeps track of each variable's offset in its own internal tables.
Upon exiting the function, the compiler tears down the stack frame:
mov esp,ebp
pop ebp
At a low level, the CPU only works with literal BYTE/WORD/DWORD/etc. values and addresses (which are the same kind of thing, just used differently).
A memory address that's needed is either stored in a named buffer (e.g. a global variable) whose known address the compiler substitutes at compile time, or held in a CPU register (quite simplified, but still true).
Being into OS development, I'd gladly explain anything I know in more depth if you like, but that's certainly out of scope for Stack Overflow, so we'd need to find another channel if you're interested.

the value of i is kept in the memory address p and so forth but how does the operating system know where that address is?
The OS doesn't know nor care where the variables are.
I suppose [variables'] addresses are being kept organized in the stack.
The stack does not organize variables' addresses. It simply contains/holds the values of the variables.
Is each declared variable (identifier) treated like an offset from the current position of the stack?
That may indeed hold true for some local variables. However, optimization can either move variables into CPU registers or eliminate them altogether (see the sketch at the end of this answer).
What about global variables, how does the operating system and/or the compiler deal with addressing these during execution?
The compiler does not deal with the variables once the program has been compiled; by then it has finished its job.
How do the OS and the compiler work to 'remember' the addresses of each identifier without using memory?
The OS does not remember any of that. It doesn't even know anything about your program's variables. To the OS your program is just a collection of somewhat amorphous code and data. Names of variables are meaningless and rarely available in compiled programs. They are only needed for programmers and compilers. Neither the CPU nor the OS needs them.
Are all variables just entered (pushed) in-order into the stack and their names are replaced with their offsets?
That would be a reasonable simplified model for local variables.
If so, what about conditional code that can change the order of declaration?
That's what the compiler has to deal with. Once the program's compiled, all has been taken care of.
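To make the point about optimization concrete, here is a minimal sketch (the function name and the -O2 flag are just illustrative): compiled without optimization, sum and k each get a stack slot; with optimization they typically live only in registers, and the compiler may even replace the loop with a closed-form expression, so the "variables" never exist in memory at all.

long sum_first_n(long n) {
    long sum = 0;               // with -O2: typically a register, or folded away entirely
    for (long k = 0; k < n; ++k)
        sum += k;               // may be rewritten as n * (n - 1) / 2
    return sum;
}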

As @Stochastically explained:
The compiler references the variables (i, p and p2p in your case)
relative to SP. In other words, the compiler decides what the offset
of each variable should be from SP, and produces machine code
accordingly.
Maybe this example explains it further. It is on amd64, so the size of a pointer is 8 bytes. As you can see, there are no variable names, only offsets from a register.
#include <cstdlib>
#include <stdio.h>
using namespace std;
int main(int argc, char** argv) {
    int i, *p = &i;
    int **p2p = &p;
    printf("address of i: %p", p); // 0x7fff4d24ae8c
    return 0;
}
disassembly:
!int main(int argc, char** argv) {
main(int, char**)+0: push %rbp
main(int, char**)+1: mov %rsp,%rbp
main(int, char**)+4: sub $0x30,%rsp
main(int, char**)+8: mov %edi,-0x24(%rbp)
main(int, char**)+11: mov %rsi,-0x30(%rbp)
!
! int i, *p = &i;
main(int, char**)+15: lea -0x4(%rbp),%rax
main(int, char**)+19: mov %rax,-0x10(%rbp) //8(pointer)+4(int)=12=0x10-0x4
! int **p2p = &p;
main(int, char**)+23: lea -0x10(%rbp),%rax
main(int, char**)+27: mov %rax,-0x18(%rbp) //8(pointer)
! printf("address 0f i: %p",p);//0x7fff4d24ae8c
main(int, char**)+31: mov -0x10(%rbp),%rax //this is pointer
main(int, char**)+35: mov %rax,%rsi //get address of variable, value would be %esi
main(int, char**)+38: mov $0x4006fc,%edi
main(int, char**)+43: mov $0x0,%eax
main(int, char**)+48: callq 0x4004c0 <printf#plt>
! return 0;
main(int, char**)+53: mov $0x0,%eax
!}
main(int, char**)()
main(int, char**)+58: leaveq
main(int, char**)+59: retq

Related

Addressing stack variables

As far as I understand, stack variables are stored at a fixed offset from the stack frame pointer.
But how are those variables addressed later?
Consider the following code:
#include <iostream>

int main()
{
    int a = 0;
    int b = 1;
    int c = 2;
    std::cout << b << std::endl;
}
How does the compiler know where to find b? Does it store its offset from the stack frame pointer? And if so, where is this information stored? And does that mean that an int needs more than 4 bytes to be stored?
The location (relative to the stack pointer) of stack variables is a compile-time constant.
The compiler always knows how many things it has pushed onto the stack since the beginning of the function, and therefore the relative position of any one of them within the stack frame. (Unless you use alloca or VLAs.¹)
On x86 this is usually achieved by addressing relative to the ebp or esp registers, which are typically used to represent the "beginning" and "end" of the stack frame. The offsets themselves don't need to be stored anywhere as they are built into the instruction as part of the addressing scheme.
Note that local variables are not always stored on the stack.
The compiler is free to put them wherever it wants, so long as it behaves as if it were allocated on the stack.
In particular, small objects like integers may simply stay in a register for the full duration of their lifespans (or until the compiler is forced to spill them onto the stack), constants may be stored in read-only memory, or any other optimization that the compiler deems fit.
Footnote 1: In functions that use alloca or a VLA, the compiler will use a separate register (like RBP on x86-64) as a "frame pointer" even in an optimized build, and address locals relative to the frame pointer, not the stack pointer. The number of named C variables is known at compile time, so they can go at the top of the stack frame, where their offset from the frame pointer is constant. Multiple VLAs can simply work as pointers to space allocated as if by alloca. (That's one typical implementation strategy.)
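For completeness, here is a sketch of the alloca case the footnote describes (alloca is a common but non-standard extension; the header is <alloca.h> on typical POSIX toolchains and the call is spelled _alloca on MSVC, so treat those details as assumptions). The allocation size is only known at run time, so the stack pointer moves by a variable amount, while named locals such as len keep fixed frame-pointer-relative offsets:

#include <alloca.h>   // non-standard; widely available on POSIX toolchains
#include <cstdio>
#include <cstring>

void echo_on_stack(const char *msg) {
    std::size_t len = std::strlen(msg);
    char *buf = static_cast<char *>(alloca(len + 1)); // runtime-sized stack block
    for (std::size_t i = 0; i <= len; ++i)
        buf[i] = msg[i];       // copy, including the terminating '\0'
    std::printf("%s\n", buf);  // buf becomes invalid once this function returns
}

int main() {
    echo_on_stack("stack frames are fun");
    return 0;
}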

C++ how are variables accessed in memory?

When I create a new variable in a C++ program, e.g. a char:
char c = 'a';
how does C++ then have access to this variable in memory? I would imagine that it would need to store the memory location of the variable, but then that would require a pointer variable, and this pointer would again need to be accessed.
See the docs:
When a variable is declared, the memory needed to store its value is
assigned a specific location in memory (its memory address).
Generally, C++ programs do not actively decide the exact memory
addresses where its variables are stored. Fortunately, that task is
left to the environment where the program is run - generally, an
operating system that decides the particular memory locations on
runtime. However, it may be useful for a program to be able to obtain
the address of a variable during runtime in order to access data cells
that are at a certain position relative to it.
You can also refer to this article on Variables and Memory:
The Stack
The stack is where local variables and function parameters reside. It
is called a stack because it follows the last-in, first-out principle.
As data is added or pushed to the stack, it grows, and when data is
removed or popped it shrinks. In reality, memory addresses are not
physically moved around every time data is pushed or popped from the
stack, instead the stack pointer, which as the name implies points to
the memory address at the top of the stack, moves up and down.
Everything below this address is considered to be on the stack and
usable, whereas everything above it is off the stack, and invalid.
This is all accomplished automatically by the operating system, and as
a result it is sometimes also called automatic memory. On the
extremely rare occasions that one needs to be able to explicitly
invoke this type of memory, the C++ key word auto can be used.
Normally, one declares variables on the stack like this:
void func () {
int i; float x[100];
...
}
Variables that are declared on the stack are only valid within the
scope of their declaration. That means when the function func() listed
above returns, i and x will no longer be accessible or valid.
There is another limitation to variables that are placed on the stack:
the operating system only allocates a certain amount of space to the
stack. As each part of a program that is being executed comes into
scope, the operating system allocates the appropriate amount of memory
that is required to hold all the local variables on the stack. If this
is greater than the amount of memory that the OS has allowed for the
total size of the stack, then the program will crash. While the
maximum size of the stack can sometimes be changed by compile time
parameters, it is usually fairly small, and nowhere near the total
amount of RAM available on a machine.
Assuming this is a local variable, the variable is allocated on the stack, i.e. in RAM. The compiler keeps track of the variable's offset on the stack. In the basic scenario, when any computation is performed with the variable, it is moved into one of the processor's registers, the CPU performs the computation, and the result is then written back to RAM. Modern compilers and processors can keep much of a stack frame in registers, and there are multiple levels of caching between the registers and RAM, so it can get quite complex.
Please note that the name "c" is no longer mentioned in the binary (unless you have debugging symbols). The binary then works only with memory locations. E.g. a simple addition would look like this:
a = b + c
take the value at memory offset 1 and put it in register 1
take the value at memory offset 2 and put it in register 2
sum registers 1 and 2 and store the result in register 3
copy register 3 to memory location 3
The binary doesn't know "a", "b" or "c". The compiler just said "a is in memory 1, b is in memory 2, c is in memory 3". And the CPU just blindly executes the commands the compiler has generated.
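A minimal C++ sketch of the same idea (the comments show one plausible unoptimized x86-64 translation; the actual registers and offsets are chosen by the compiler and will differ):

int add_locals() {
    int b = 2;      // e.g. mov DWORD PTR [rbp-8],  2     (b at offset -8)
    int c = 3;      // e.g. mov DWORD PTR [rbp-12], 3     (c at offset -12)
    int a = b + c;  // e.g. mov eax, [rbp-8] ; add eax, [rbp-12] ; mov [rbp-4], eax
    return a;       // e.g. mov eax, [rbp-4]
}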
C++ itself (or rather, the compiler) has access to this variable in terms of the program structure, represented as a data structure. Perhaps you're asking how other parts of the program access it at run time.
The answer is that it varies. It can be stored in a register, on the stack, on the heap, or in the data/bss sections (global/static variables), depending on its context and the platform it was compiled for. If you needed to pass it around by reference (or pointer) to other functions, then it would likely be stored on the stack. If you only need it in the context of your function, it would probably be handled in a register. If it's a member variable of an object on the heap, then it's on the heap, and you reference it by an offset into the object. If it's a global/static variable, then its address is determined once the program is fully loaded into memory.
C++ eventually compiles down to machine language, and often runs within the context of an operating system, so you might want to brush up a bit on Assembly basics, or even some OS principles, to better understand what's going on under the hood.
Let's say our program starts with a stack address of 4000000.
When you call a function, depending on how much stack it uses, the stack will be "allocated" like this.
Let's say we have two ints (8 bytes):
void function()
{
    int a = 0;
    int b = 0;
}
Then what's going to happen in assembly is:
MOV EBP,ESP //Here we store the original value of the stack address (4000000) in EBP, and we restore it at the end of the function back to 4000000
SUB ESP, 8 //here we "allocate" 8 bytes in the stack, which basically just decreases the ESP addr by 8
so our ESP address was changed from
4000000
to
3999992
That's how the program knows the stack address for the first int is 3999992, and the second int occupies 3999996 up to 4000000.
Even though this pretty much has nothing to do with the compiler, it's really important to know, because once you understand how the stack is "allocated", you realize how cheap it is to do things like
char my_array[20000];
since all it's doing is a single sub esp, 20000, which is one assembly instruction;
but if you actually touch all those bytes, e.g. with memset(my_array, 0, sizeof my_array), that's a different story.
how does C++ then have access to this variable in memory?
It doesn't!
Your computer does, and it is instructed on how to do that by loading the location of the variable in memory into a register. This is all handled by assembly language. I shan't go into the details here of how such languages work (you can look it up!) but this is rather the purpose of a C++ compiler: to turn an abstract, high-level set of "instructions" into actual technical instructions that a computer can understand and execute. You could sort of say that assembly programs contain a lot of pointers, though most of them are literals rather than "variables".

How a CPU finds the location of a variable

I am working with C++ (sorry if my question is a little bit confusing). I know how a pointer works: it holds the address of a variable.
My question is: if I have created a simple variable (not a pointer) on the stack or the heap, how does the CPU find the address of that variable when there is no pointer pointing to it? It is just a name for an address in memory.
For example:
int main()
{
    int a = 5; // created a variable by allocating 4 bytes
    return 0;
}
It creates a variable, but the question is: how will the CPU find it?
It seems that you have a conceptual misunderstanding. In every program there is a memory area called the stack where local variables are allocated. In most computer architectures there is a register called the stack pointer (rsp in the x86_64 architecture) which points at the top of the stack (which grows from higher memory addresses to lower addresses).
At execution time, the program code (generated by the compiler, not the OS) uses this stack pointer as the base for its local variables. So your code would move the number 5 itself into a location addressed relative to the value the stack pointer had when main() was called, at an offset of 4 bytes (the absolute location the sp register points to is not known in advance, because it changes with every function call).
How does the CPU find the address of a variable when there is no pointer pointing to it? It is just a name for an address in memory.
This is not true. In fact it is the opposite of true.
In your executable, your compiler has written the address of the variable (or a relative offset from the current stack frame, anyway) and machine instructions that describe how to use it. The name is not present at all.
And that's how your CPU knows how to find the variable: the variable does not exist any more at runtime! It is an abstraction of C++, provided to you the programmer to make your life easier. But it bears little to no relationship to how the actual computer program actually works and how your CPU executes it.

What register points to the heap?

I just finished learning ARM architecture/assembly. If the SP register holds the address of the next memory location to put data into, what holds the address of the heap? For example, in C++ if you allocate an object on the heap (e.g. MyObj *example = new MyObj();), what would the assembly look like; that is, how would it know where example is?
The stack in this context is a lower-level structure provided by the OS/EABI; that's why there is a conventional register for it. The heap, however, is a higher-level structure provided by the OS, so managing it depends on the agreement between your app and the OS. In assembly terms, you use the heap by dereferencing addresses held in ordinary registers.
The SP register is normally used to track the current position within the stack. This means it pretty much needs to always point to the stack.
The same cannot be said for the heap. When you need to access a variable there, the address of that variable will be stored in a pointer or some other memory reference in your app. At the moment that address is needed, a register will be used to hold it for the dereference, but which register is not only compiler-dependent; it is also likely to depend on which registers are free after the compiler has optimized the code.
The processor needs a special register for the stack pointer because sometimes (an interrupt or exception) the processor hardware must modify the SP directly, without executing any code. That's not necessary for the heap, so there is no need to use a special register to point to the heap. At runtime the OS decides where a particular chunk of code can store things on the heap, and any register can be used to hold that address.
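In C++ terms, a sketch based on the question's MyObj (the register choice is entirely up to the compiler): the address returned by new is just ordinary data, held in whatever register or stack slot the compiler finds convenient.

#include <cstdio>

struct MyObj {
    int value = 0;
};

int main() {
    MyObj *example = new MyObj();   // the heap allocator decides the address
    example->value = 5;             // access = register holding `example`
                                    //          + the member's fixed offset (0 here)
    std::printf("object lives at %p\n", (void *)example);
    delete example;
    return 0;
}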
In ARM's EABI, R13 (SP) points to the last pushed data on a full descending stack. However, it does not need to point there at all times; code like the following can be legal (assuming r0 points to a valid memory location accessible by our program):
stmfd r0!, {r1-r12, sp, lr}
ldr r1, [r0]
mov r2, lr
sub lr, sp, #4
str sp, [r4]
add sp, lr, #4
ldmfd r0!, {r1-r12, sp, pc}
Of course it doesn't make sense, but its only point is that if you can safely reload sp, lr and all the other callee-saved registers, you can scratch them as much as you want, as long as you remember to restore their values before returning to the caller.
Another point: the stack and the heap are not necessarily the same kind of thing. The heap is a higher-level abstraction for malloc/free-style constructs, whereas the stack is used for saving callee registers when four registers are not enough, for passing function arguments, for data structures and anything else you can imagine. The stack is a bit harder to manage, though, because you have to keep track of all the data yourself, instead of just allocating a region to a pointer and freeing it when you're done.
Usually, depending on the program and environment, you can use various hacks for optimisation, such as knowingly corrupting some non-scratch registers and getting away with it; however, you have to manage that in the caller, which must be aware of which registers will be scratched in subsequent calls. Hence the EABI really only matters when transferring control to or from another program; you can do what you like during your own time on the CPU, as long as you leave everything as clean as it was before you entered.

C++: Function variable declarations, how does it work internally?

This has been bothering me for a long time now. Let's say I have a function:
void test() {
    int t1, t2, t3;
    int t4 = 0;
    int bigvar[10000];
    // do something
}
How does the computer handle the memory allocation for these variables?
I've always thought that the variables' space is saved in the .exe, which the computer then reads; is this correct? But as far as I know, the bigvar array doesn't take up the space of 10000 int elements in the .exe, since it's uninitialized. So how does its memory allocation work when I call the function?
Local variables like those are generally implemented using the processor's stack. That means the only thing the compiler needs to do is compute the size of each variable and add them together. The total is the amount by which to adjust the stack pointer on entry to the function, and to adjust it back on exit. Each variable is then accessed via its relative offset into that block of memory on the stack.
Your code, when compiled on Linux, ends up looking like this in x86 assembler:
test:
pushl %ebp
movl %esp, %ebp
subl $40016, %esp
movl $0, -4(%ebp)
leave
ret
In the above, the constant $40016 is the space needed for the four 32-bit ints t1, t2, t3 and t4, while the remaining 40000 bytes account for the 10000-element array bigvar.
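A quick sanity check of the 40016 figure (this assumes a 4-byte int, which holds on typical 32-bit and 64-bit desktop targets but is not guaranteed by the standard):

static_assert(sizeof(int) == 4, "example assumes a 4-byte int");
static_assert(4 * sizeof(int) + 10000 * sizeof(int) == 40016,
              "t1..t4 plus bigvar[10000]");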
I can't add much to what has already been said, except for a few notes. You can actually put local variables into the executable file and have them allocated in the data segment (and initialized) instead of on the stack. To do that, declare them as static. But then all invocations of the function share the same variables, whereas with the stack each invocation creates a new set. This can lead to a lot of trouble when the function is called simultaneously by several threads, or when there is recursion (try to imagine that). That's why most languages use the stack for local variables, and static is rarely used.
On some old compilers, I've encountered the behavior that the array is statically allocated, meaning that memory is set aside for it when the program loads, and that space is used from then on. This behavior is not safe (see Sergey's answer), nor do I expect it to be permitted by the standards, but I have encountered it in the wild. (I have no memory of which compiler it was.)
For the most part, local variables are kept on the stack, together with return addresses and all that other stuff. This means the uninitialized values may contain sensitive information. This also includes arrays, as per unwind's answer.
Another valid implementation is that the variable found on the stack is a pointer, and that the compiler does the allocation and deallocation (presumably in an exception-safe manner) under the hood. This conserves stack space (which has to be allocated before the program starts, and cannot easily be extended on x86 architectures) and is also quite useful for C's standard VLAs (variable-length arrays, a.k.a. the poor man's std::vector).
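As a practical aside (a sketch, not something the answers above prescribe): when a local buffer is large enough to threaten the stack limit, the usual C++ approach is to let a heap-backed container own the storage, so only a small handle lives on the stack.

#include <vector>

void process() {
    // int bigvar[10000];            // ~40 KB carved out of the limited stack

    std::vector<int> bigvar(10000);  // small header on the stack,
                                     // the 10000 ints on the heap
    bigvar[0] = 1;                   // element access = heap pointer + offset
}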