Stack and heap address regions differ between Windows and Linux - C++

I'm testing the address ranges of the heap and stack in C++.
My code is:
#include <iostream>
using namespace std;

int g;
int uninitialized_g;

class Heap {
    int a;
    int b;
};

int main() {
    int stack_variable = 3;
    int stack_variable_1 = 3;
    g = 3;
    Heap * heap_class = new Heap;
    Heap * heap_class_1 = new Heap;
    cout << "Static initialized g's addr = " << &g << endl;
    cout << "Static un-initialized g's addr = " << &uninitialized_g << endl;
    cout << "Stack stack_variable's addr = " << &stack_variable << endl;
    cout << "Stack stack_variable1's addr = " << &stack_variable_1 << endl;
    cout << "Heap heap_class's addr = " << heap_class << endl;
    cout << "Heap heap_class1's addr = " << heap_class_1 << endl;
    delete heap_class;
    delete heap_class_1;
    return 0;
}
On Windows (Eclipse with MinGW), the result is:
Static initialized g's addr = 0x407020
Static un-initialized g's addr = 0x407024
Stack stack_variable's addr = 0x22fed4
Stack stack_variable1's addr = 0x22fed0
Heap heap_class's addr = 0x3214b0
Heap heap_class1's addr = 0x3214c0
and on Linux with g++ the result is:
Static initialized g's addr = 0x601180
Static un-initialized g's addr = 0x601184
Stack stack_variable's addr = 0x7ffff5c8c2c8
Stack stack_variable1's addr = 0x7ffff5c8c2cc
Heap heap_class's addr = 0x1c7c010
Heap heap_class1's addr = 0x1c7c030
which makes sense to me.
So, the questions are:
In the Windows result, why is the heap memory address sometimes higher than the stack?
On Linux, the heap addressing makes sense, but why do the stack addresses grow upward here?
Thanks in advance.

Your program runs in an environment called the operating system, so there is more code in action than you probably expected.
1) Stack & Heap
The stack address of the first thread is defined by the operating system. You might set some values in the PE32 exe file header to request specific values, but this is at least different on Linux.
The C runtime library requests memory from the operating system, IIRC with the sbrk function (and mmap for larger blocks). The operating system can provide memory wherever it likes. Keep in mind that although you have a linear address space, you don't have a contiguous memory layout; it looks more like Swiss cheese.
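As a rough illustration of the "runtime asks the OS" part, here is a minimal Linux-only sketch, assuming a POSIX system where sbrk is available; note that glibc's malloc may satisfy large requests with mmap instead, so the program break is only part of the picture:
#include <iostream>
#include <unistd.h>   // sbrk

int main() {
    void* before = sbrk(0);        // current "program break" (top of the brk heap)
    for (int i = 0; i < 1000; ++i)
        (void)new char[1024];      // deliberately leaked; small blocks usually come from the brk heap
    void* after = sbrk(0);         // the break has (usually) moved up
    std::cout << "program break before: " << before << '\n'
              << "program break after:  " << after  << '\n';
    return 0;
}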
2) Addresses of local variables
This is unspecified behavior. The compiler is free to assign the order of local variables in memory. Sometimes I have seen the order become alphabetical (just try a rename), or change with the level of optimization. Just accept it.

Related

Beginner question memory allocation c++ [duplicate]

This question already has answers here:
What are the differences between virtual memory and physical memory?
I'm currently learning C++. During some heap allocation exercises I tried to provoke a bad allocation. My physical memory is about 38 GB. Why is it possible to allocate such a huge amount of memory? Is my basic calculation of bytes wrong? I don't get it. Can anyone give me a hint please? Thanks.
#include <iostream>
int main(int argc, char **argv){
const size_t MAXLOOPS {1'000'000'000};
const size_t NUMINTS {2'000'000'000};
int* p_memory {nullptr};
std::cout << "Starting program heap_overflow.cpp" << std::endl;
std::cout << "Max Loops: " << MAXLOOPS << std::endl;
std::cout << "Number of Int per allocation: " << NUMINTS << std::endl;
for(size_t loop=0; loop<MAXLOOPS; ++loop){
std::cout << "Trying to allocate new heap in loop " << loop
<< ". current allocated mem = " << (NUMINTS * loop * sizeof(int))
<< " Bytes." << std::endl;
p_memory = new (std::nothrow) int[NUMINTS];
if (nullptr != p_memory)
std::cout << "Mem Allocation ok." << std::endl;
else {
std::cout << "Mem Allocation FAILED!." << std::endl;
break;
}
}
return 0;
}
Output:
...
Trying to allocate new heap in loop 17590. current allocated mem = 140720000000000 Bytes.
Mem Allocation ok.
Trying to allocate new heap in loop 17591. current allocated mem = 140728000000000 Bytes.
Mem Allocation FAILED!.
Many (but not all) virtual-memory-capable operating systems use a concept known as demand paging: when you allocate memory, you perform bookkeeping allowing you to use that memory. However, you do not reserve actual pages of physical memory at that time. [1]
When you actually attempt to read or write to any byte within a page of that allocated memory, a page fault occurs. The fault handler detects that the page has been pre-allocated but not demand-paged in. It then reserves a page of physical memory, and sets up the PTE before returning control to the program.
If you attempt to write into the memory you allocate right after each allocation, you may find that you run out of physical memory much faster.
Notes:
[1] It is possible to have an OS implementation that supports virtual memory but immediately allocates physical memory to back virtual allocations; virtual memory is a necessary, but not sufficient, condition to replicate your experiment.
One comment mentions swapping to disk. This is likely a red herring: the pagefile size is typically comparable to the size of RAM, and the total allocation here was around 140 TB, which is much larger than individual disks. It is also pointless to page out empty, untouched pages to disk.
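To see the effect described above, here is a hedged sketch of the same loop that additionally touches one int per 4 KiB page right after each allocation (the 4096-byte page size is an assumption; query it with sysconf(_SC_PAGESIZE) on POSIX if you need the exact value). Expect it to fail, or be killed by the operating system's out-of-memory handling, far sooner than the original:
#include <iostream>
#include <new>
#include <cstddef>

int main() {
    const std::size_t NUMINTS {2'000'000'000};
    const std::size_t INTS_PER_PAGE {4096 / sizeof(int)};   // assumed 4 KiB pages
    for (std::size_t loop = 0; ; ++loop) {
        int* p = new (std::nothrow) int[NUMINTS];
        if (p == nullptr) {
            std::cout << "Mem Allocation FAILED in loop " << loop << std::endl;
            break;
        }
        // Write one int per page so each page is actually faulted in and committed.
        for (std::size_t i = 0; i < NUMINTS; i += INTS_PER_PAGE)
            p[i] = 0;
        std::cout << "Allocated and touched block " << loop << std::endl;
    }
    return 0;
}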

Pointers pointing to the same memory location but in different programs

I've written two programs: one (p1.cpp) that prints the value and address of a variable every second,
// p1.cpp
#include <iostream>
#include <unistd.h>   // sleep
using namespace std;

int main() {
    int x = 13;
    int *p = &x;
    while (true) {
        cout << " value of x: " << *p << " addr: " << p << endl;
        sleep(1);
    }
}
and the other (p2.cpp), in which I manually point a pointer to the location printed out by p1.cpp and change the value.
// p2.cpp
#include <iostream>
using namespace std;

int main() {
    int *p = (int*)0x61ff08; // this address is manually set and compiled in
    cout << "value of p from p2.cpp : " << *p << endl;
    *p = 10;
}
However, upon running p1.cpp, setting the location, and running p2.cpp, the value in the first program doesn't change. In fact, p2.cpp shows some garbage value if I display what p points to.
(screenshots of the output of p1.cpp and p2.cpp omitted)
I would like to know why this is happening and why the value of x isn't changed by the pointer in another program.
Thanks!
In modern operating systems like Linux, Windows, or macOS, each process has its own virtual memory address space.
Therefore the memory address from the process of your program p1 has nothing to do with the memory of the process of your program p2.
If you really want to access memory between processes directly, you need to use shared memory.
But what is your intention? Do you just want to play around, or do you want communication between processes? In the latter case you should read about IPC (inter-process communication). There are a lot of IPC mechanisms you can use, like named pipes, sockets, or shared memory, depending on what you want to achieve.
You may have a look at this article for first introduction into the topic: https://en.wikipedia.org/wiki/Inter-process_communication
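For completeness, here is a minimal POSIX shared-memory sketch (writer side only), assuming Linux or macOS with <sys/mman.h>; the segment name "/demo_shm" and the single-int payload are purely illustrative:
// writer.cpp - creates a named shared-memory segment and stores one int in it
#include <fcntl.h>      // shm_open, O_* flags
#include <sys/mman.h>   // mmap, munmap
#include <unistd.h>     // ftruncate, close
#include <cstdio>       // perror
#include <iostream>

int main() {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, sizeof(int));                      // size the segment
    void* addr = mmap(nullptr, sizeof(int), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }
    *static_cast<int*>(addr) = 13;                   // now visible to other processes
    std::cout << "wrote 13 into /demo_shm\n";
    munmap(addr, sizeof(int));
    close(fd);
    return 0;
}
A second process can shm_open("/demo_shm", O_RDWR, 0600), mmap it the same way, and read or modify the same int; call shm_unlink("/demo_shm") when you are finished (on older glibc you may need to link with -lrt).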

Is it possible to get the address of a memory address?

I know that we can read the location of a variable and its value in memory, but I want to go deeper and see where that memory address itself is located, if that is possible.
In my case 0x61fe09 is the memory address of a; what is the memory address of 0x61fe09?
code:
#include <iostream>
using namespace std;
int main()
{
    int a = 42;
    int* address_of_a = &a;
    int** address_of_address_of_a = &address_of_a;
    cout << " a = " << a << " at memory address = " << &address_of_address_of_a << '\n';
    return 0;
}
There's no memory address for &a because it's not stored in memory.
You could store it in memory like so:
int* pointer_to_a = &a;
Now you can print &pointer_to_a to see the address where you stored &a.
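A small sketch of that idea, printing each level of the chain (the variable names are just illustrative):
#include <iostream>

int main() {
    int a = 42;
    int*  p  = &a;   // p stores the address of a
    int** pp = &p;   // pp stores the address of p
    std::cout << "value of a          : " << a   << '\n'
              << "address of a  (&a)  : " << &a  << '\n'
              << "address of p  (&p)  : " << &p  << '\n'
              << "address of pp (&pp) : " << &pp << '\n';
    // &a by itself is just a value; it only gets an address of its own once
    // you store it in an object such as p.
    return 0;
}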

Placement new and aligning for possible offset memory

I've been reading up on placement new, and I'm not sure whether I'm fully "getting" it when it comes to proper alignment.
I've written the following test program to attempt to allocate some memory to an aligned spot:
#include <iostream>
#include <cstdint>
using namespace std;
unsigned char* mem = nullptr;
struct A
{
double d;
char c[5];
};
struct B
{
float f;
int a;
char c[2];
double d;
};
void InitMemory()
{
mem = new unsigned char[1024];
}
int main() {
InitMemory();
//512 byte blocks to write structs A and B to, purposefully misaligned
unsigned char* memoryBlockForStructA = mem + 1;
unsigned char* memoryBlockForStructB = mem + 512;
unsigned char* firstAInMemory = (unsigned char*)(uintptr_t(memoryBlockForStructA) + uintptr_t(alignof(A) - 1) & ~uintptr_t(alignof(A) - 1));
A* firstA = new(firstAInMemory) A();
A* secondA = new(firstA + 1) A();
A* thirdA = new(firstA + 2) A();
cout << "Alignment of A Block: " << endl;
cout << "Memory Start: " << (void*)&(*memoryBlockForStructA) << endl;
cout << "Starting Address of firstA: " << (void*)&(*firstA) << endl;
cout << "Starting Address of secondA: " << (void*)&(*secondA) << endl;
cout << "Starting Address of thirdA: " << (void*)&(*thirdA) << endl;
cout << "Sizeof(A): " << sizeof(A) << endl << "Alignof(A): " << alignof(A) << endl;
return 0;
}
Output:
Alignment of A Block:
Memory Start: 0x563fe1239c21
Starting Address of firstA: 0x563fe1239c28
Starting Address of secondA: 0x563fe1239c38
Starting Address of thirdA: 0x563fe1239c48
Sizeof(A): 16
Alignof(A): 8
The output appears to be valid, but I still have some questions about it:
Will fourthA, fifthA, etc... all be aligned as well?
Is there a simpler way of finding a properly aligned memory location?
In the case of struct B, it is set up to not be memory friendly. Do I need to reorder it so that the largest members are at the top of the struct and the smallest members are at the bottom? Or will the compiler automatically pad everything so that its member d will not be misaligned?
Will fourthA, fifthA, etc... all be aligned as well?
Yes: the size of a complete object type is always a multiple of its alignment (here sizeof(A) is 16 and alignof(A) is 8), so firstA + 1, firstA + 2, ... all stay correctly aligned, as long as they still fit inside the buffer.
Is there a simpler way of finding a properly aligned memory location?
Yes, use alignas (http://en.cppreference.com/w/cpp/language/alignas) or std::align (http://en.cppreference.com/w/cpp/memory/align), as Dan M said.
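For reference, a short sketch of what std::align does with the same struct A (the buffer size and the deliberate off-by-one start are just for illustration):
#include <cstddef>
#include <iostream>
#include <memory>     // std::align
#include <new>        // placement new

struct A { double d; char c[5]; };

int main() {
    alignas(alignof(A)) unsigned char buffer[128];
    void*       p     = buffer + 1;          // deliberately misaligned start
    std::size_t space = sizeof(buffer) - 1;
    // std::align rounds p up to the next alignof(A) boundary and shrinks space,
    // or returns nullptr if sizeof(A) no longer fits in the remaining space.
    if (std::align(alignof(A), sizeof(A), p, space)) {
        A* a = new (p) A{};
        std::cout << "aligned A constructed at " << static_cast<void*>(a) << '\n';
        a->~A();
    }
    return 0;
}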
In the case of struct B, it is set up to not be memory friendly. Do I need to reorder it so that the largest members are at the top of the struct and the smallest members are at the bottom? Or will the compiler automatically pad everything so that its member d will not be misaligned?
You should reorganize it if you care about the size. The compiler will not reorder the members of a struct for you: within the same access specifier, members must be laid out in declaration order, with padding inserted so that each member stays aligned. One reason is that raw data (coming from a file, the network, ...) is often reinterpreted as a struct, and two compilers reordering members differently would break such code.
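A quick way to see that padding (typical of a 64-bit ABI, not a guarantee) is to print the member offsets of B with offsetof:
#include <cstddef>    // offsetof
#include <iostream>

struct B { float f; int a; char c[2]; double d; };

int main() {
    std::cout << "offsetof(B, f) = " << offsetof(B, f) << '\n'   // typically 0
              << "offsetof(B, a) = " << offsetof(B, a) << '\n'   // typically 4
              << "offsetof(B, c) = " << offsetof(B, c) << '\n'   // typically 8
              << "offsetof(B, d) = " << offsetof(B, d) << '\n'   // typically 16: padding after c keeps d 8-byte aligned
              << "sizeof(B)      = " << sizeof(B)      << '\n';  // typically 24
    return 0;
}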
I hope my explanations are clear and that I did not make any mistakes.

Maximum memory that can be allocated dynamically and at compile time in c++

I am playing around to understand how much memory can be allocated. Initially I thought that the maximum memory that can be allocated is equal to the physical memory (RAM). I checked my RAM on Ubuntu 12.04 by running the command shown below:
~$ free -b
             total        used        free      shared     buffers      cached
Mem:    3170848768  2526740480   644108288           0   265547776  1360060416
-/+ buffers/cache:   901132288  2269716480
Swap:   2428497920           0  2428497920
As shown above, the total physical memory is 3 GB (3170848768 bytes), of which only 644108288 bytes are free, so I assumed I could at most allocate this much memory. I tested it by writing a small program with only the two lines below:
char * p1 = new char[644108290] ;
delete p1;
Since the code ran fine, it means it allocated the memory successfully. I also tried to allocate more memory than the available free physical memory, and it still did not throw any error. Then, per the question
maximum memory which malloc can allocate
I thought it must be using virtual memory. So I tested the code with the free swap size, and it also worked.
char * p1 = new char[2428497920] ;
delete p1;
Then I tried to allocate the free swap plus free RAM bytes of memory:
char * p1 = new char[3072606208] ;
delete p1;
But this time the code failed, throwing a bad_alloc exception. Why didn't the code work this time?
Then I allocated the memory at compile time in a new program, as shown below:
char p1[3072606208];
char p2[4072606208];
char p3[5072606208];
cout << "Size of array p1 = " << sizeof p1 << endl;
cout << "Size of array p2 = " << sizeof p2 << endl;
cout << "Size of array p3 = " << sizeof p3;
The output shows
Size of array p1 = 3072606208
Size of array p2 = 4072606208
Size of array p3 = 777638912
Could you please help me understand what is happening here? Why did it allow the memory to be allocated at compile time but not dynamically?
When allocated at compile time, how come p1 and p2 were able to reserve more memory than swap plus free RAM, whereas p3 did not (its reported size wrapped around)?
How exactly does this work? Is this some undefined or OS-specific behaviour? Thanks for your help. I am using Ubuntu 12.04 and gcc 4.6.3.
Memory pages aren't actually mapped to your program until you use them. All malloc does is reserve a range of the virtual address space. No physical RAM is mapped to those virtual pages until you try to read or write them.
Even when you allocate global or stack ("automatic") memory, there's no mapping of physical pages until you touch them.
Finally, sizeof() is evaluated at compile time, when the compiler has no idea what the OS will do later. So it will just tell you the expected size of the object.
You'll find that things will behave very differently if you try to memset the memory to 0 in each of your cases. Also, you might want to try calloc, which zeroes its memory.
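A hedged sketch of that experiment (the sizes are illustrative, and whether calloc commits pages up front is OS- and allocator-dependent):
#include <cstdlib>    // malloc, calloc, free
#include <cstring>    // memset

int main() {
    const std::size_t n = std::size_t(1) << 30;              // 1 GiB
    char* reserved = static_cast<char*>(std::malloc(n));     // usually cheap: address space only
    if (reserved != nullptr)
        std::memset(reserved, 0, n);                          // now the OS must commit ~1 GiB of pages
    char* zeroed = static_cast<char*>(std::calloc(n, 1));     // zero-filled by contract
    std::free(zeroed);
    std::free(reserved);
    return 0;
}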
Interesting... one thing to note: when you write
char p[100];
you allocate (well, reserve) 100 bytes on the stack.
When you write
char* p = malloc(100);
you allocate 100 bytes on the heap. Big difference. Now I don't know why the stack allocations are working, unless the value between the [] is being read as an int by the compiler and is thus wrapping around to a much smaller block.
Most OSs don't allocate physical memory anyway; they give you pages from a virtual address space which remain unused (and therefore unallocated) until you use them, at which point the memory-management unit of the CPU steps in to give you the memory you asked for. Try writing to those bytes you allocated and see what happens.
Also, on Windows at least, when you allocate a block of memory you can only reserve the largest contiguous block the OS has available, so as the address space gets fragmented by repeated allocations, the largest single block you can malloc shrinks. I don't know whether Linux has this problem too.
There's a huge difference between these two programs:
program1.cpp
int main () {
char p1[3072606208];
char p2[4072606208];
char p3[5072606208];
std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}
program2.cpp:
char p1[3072606208];
char p2[4072606208];
char p3[5072606208];
int main () {
std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}
The first allocates memory on the stack; it's going to get a segmentation fault due to stack overflow. The second doesn't do much at all. That memory doesn't quite exist yet. It's in the form of data segments that aren't touched. Let's modify the second program so that the data are touched:
char p1[3072606208];
char p2[4072606208];
char p3[5072606208];
int main () {
p1[3072606207] = 0;
p2[4072606207] = 0;
p3[5072606207] = 0;
std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}
This doesn't allocate memory for p1, p2, or p3 on the heap or the stack. That memory lives in data segments. It's a part of the application itself. There's one big problem with this: On my machine, this version won't even link.
The first thing to note is that in modern computers, processes do not get direct access to RAM (at the application level). Rather, the OS provides each process with a "virtual address space". The OS intercepts accesses to virtual memory and reserves real memory as and when needed.
So when malloc or new says it has found enough memory for you, it just means that it has found enough memory in the virtual address space. You can check this by running the following program with the memset line in, and then with it commented out (careful, this program uses a busy loop):
#include <iostream>
#include <new>
#include <string.h>
using namespace std;
int main(int argc, char** argv) {
size_t bytes = 0x7FFFFFFF;
size_t len = sizeof(char) * bytes;
cout << "len = " << len << endl;
char* arr = new char[len];
cout << "done new char[len]" << endl;
memset(arr, 0, len); // set all values in array to 0
cout << "done setting values" << endl;
while(1) {
// stops program exiting immediately
// press Ctrl-C to exit
}
return 0;
}
When memset is part of the program, you will notice the memory used by your computer jumps massively, and without it you should barely notice any difference, if any. When memset is called, it accesses all the elements of the array, forcing the OS to make the space available in physical memory. Since the argument of new here is a size_t, the maximum you can ask for on a 32-bit build is 2^32-1, though even that isn't guaranteed to succeed (it certainly doesn't on my machine).
As for your stack allocations: David Hammem's answer says it better than I could. I am surprised you were able to compile those programs. Using the same setup as you (Ubuntu 12.04 and gcc 4.6) I get compile errors like:
test.cpp: In function ‘int main(int, char**)’:
test.cpp:14:6: error: size of variable ‘arr’ is too large
Try the following code:
#include <cstdio>
#include <new>

int main() {
    bool bExit = false;
    unsigned long long iAlloc = 0;
    do {
        try {
            new char[1]();   // deliberately leaked: keep allocating until the heap is exhausted
            iAlloc++;
        } catch (const std::bad_alloc&) {
            bExit = true;
        }
    } while (!bExit);
    char chBytes[130] = {0};
    std::sprintf(chBytes, "%llu", iAlloc);
    std::printf("%s\n", chBytes);
    return 0;
}
In one run, don't open other programs; in the other run, load a few large files in an application which uses memory-mapped files.
This may help you to understand.
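If you want a self-contained way to create such a memory-mapped file yourself, a minimal POSIX sketch looks like this (the file name is illustrative and error handling is kept short):
#include <fcntl.h>      // open
#include <sys/mman.h>   // mmap, munmap
#include <sys/stat.h>   // fstat
#include <unistd.h>     // close
#include <cstdio>       // perror
#include <iostream>

int main() {
    int fd = open("large_file.bin", O_RDONLY);            // illustrative file name
    if (fd == -1) { perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }
    // The mapping consumes virtual address space, but pages are only read in
    // from the file on first access (demand paging again).
    void* data = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }
    std::cout << "mapped " << st.st_size << " bytes at " << data << '\n';
    munmap(data, st.st_size);
    close(fd);
    return 0;
}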