Pointers pointing to the same memory location but different programs - c++

I've written two programs, one (p1.cpp) that prints the value and address of a variable every second,
// p1.cpp
#include <iostream>
#include <unistd.h>   // for sleep()
using namespace std;

int main() {
    int x = 13;
    int *p = &x;
    while (true) {
        cout << " value of x: " << *p << " addr: " << p << endl;
        sleep(1);
    }
}
and the other (p2.cpp), in which I manually point a pointer to the location printed out by p1.cpp and change the value.
// p2.cpp
#include <iostream>
using namespace std;

int main() {
    int *p = (int*)0x61ff08; // this address is set manually from p1.cpp's output, then compiled
    cout << "value of p from p2.cpp : " << *p << endl;
    *p = 10;
}
However, upon running p1.cpp, setting the location, and running p2.cpp, the value in the first program doesn't seem to change. In fact, p2.cpp shows a garbage value when I display what p points to.
output of p1.cpp
output of p2.cpp
I would like to know why this is happening and why the value of x isn't changed by the pointer from another program.
Thanks!

In modern operating systems like Linux, Windows, or macOS, each process has its own virtual address space.
Therefore an address printed by the process running your program p1 has nothing to do with the memory of the process running your program p2.
If you really want to access memory between processes directly you need to use shared memory.
But what is your intention? Do you just want to play around, or do you want communication between processes? In the latter case you should read about IPC - inter-process communication. There are a lot of IPC mechanisms you can use, like named pipes, sockets or shared memory, depending on what you want to achieve.
You may have a look at this article for first introduction into the topic: https://en.wikipedia.org/wiki/Inter-process_communication
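As a rough illustration of the shared-memory route (this sketch is not part of the original answer, and the name "/demo_shm" is just an example), one process can create and write a named POSIX shared-memory object while another maps the same name and reads it. Error handling is omitted for brevity.

writer.cpp:
// Creates a shared-memory object, maps it, and writes one int into it.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600); // named object, visible to other processes
    ftruncate(fd, sizeof(int));                              // give it room for one int
    int *p = static_cast<int*>(mmap(nullptr, sizeof(int),
                                    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    *p = 13;                                                  // any process that maps "/demo_shm" sees this value
    munmap(p, sizeof(int));
    close(fd);
}

reader.cpp:
// Maps the same object by name and reads the value written by writer.cpp.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <iostream>

int main() {
    int fd = shm_open("/demo_shm", O_RDONLY, 0600);
    int *p = static_cast<int*>(mmap(nullptr, sizeof(int), PROT_READ, MAP_SHARED, fd, 0));
    std::cout << "value from shared memory: " << *p << std::endl;
    munmap(p, sizeof(int));
    close(fd);
    shm_unlink("/demo_shm");   // remove the named object once done
}

On older glibc you may need to link with -lrt.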

Related

C++, how to access a specific address with a pointer

I was curious, so I tried to do this:
First program:
#include <iostream>
#include <cstdlib>   // for system()
using namespace std;

int a = 5;

int main()
{
    cout << "Value of a: " << a << endl;
    cout << "Address of a: " << &a << endl;
    system("pause");
    return 0;
}
Output:
Value of a: 5
Address of a: 0x472010
Second program:
#include <iostream>
#include <cstdlib>   // for system()
using namespace std;

int* p = reinterpret_cast<int*>(0x472010);

int main()
{
    cout << "Value of p: " << *p << endl;
    cout << "The address that the pointer points to: " << p << endl;
    cout << endl;
    system("pause");
    return 0;
}
So, I want to read the value of the variable 'a' through the 'p' pointer belonging to the other program, with the 'p' pointer pointing to that specific address.
Everything is fine until I run the second program, as the second program doesn't give the desired results.
Output second program:
Value of p: 4661264
The address that the pointer points to: 0x472010
The result does not change if I keep the window of the first program open.
I promise I’m a beginner and I’m trying new things
What am I doing wrong?
You can't directly access another process's memory in the manner you are attempting. Each process runs in its own address space. Your 2nd process is trying to access an (invalid) address within its own address space, not within the address space of the 1st process.
On Windows, to read another process's memory, your 2nd process must obtain a HANDLE to the 1st process, such as from OpenProcess(), and then must use ReadProcessMemory() to read memory from an address within the 1st process.
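A rough sketch of that approach (not from the original answer; the process ID and the address are placeholders you would take from the first program's output) might look like this:

// read_other.cpp -- sketch: read an int from another process's address space on Windows.
#include <windows.h>
#include <iostream>

int main() {
    DWORD   targetPid  = 1234;                  // placeholder: PID of the first program
    LPCVOID targetAddr = (LPCVOID)0x472010;     // placeholder: address printed by the first program

    HANDLE h = OpenProcess(PROCESS_VM_READ, FALSE, targetPid);
    if (!h) { std::cerr << "OpenProcess failed" << std::endl; return 1; }

    int value = 0;
    SIZE_T bytesRead = 0;
    if (ReadProcessMemory(h, targetAddr, &value, sizeof(value), &bytesRead))
        std::cout << "Value read from the other process: " << value << std::endl;
    else
        std::cerr << "ReadProcessMemory failed" << std::endl;

    CloseHandle(h);
    return 0;
}

Note that reading another process's memory may require sufficient privileges, and the address is only meaningful inside that other process's own address space.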
The alternative is for the 1st process to allocate a block of shared memory which the 2nd process can access directly. On Windows, you can use CreateFileMapping() and MapViewOfFile() for that purpose (see Creating Named Shared Memory on MSDN). On 'Nix systems, you can use shm_open() and mmap().

Beginner question memory allocation c++ [duplicate]

This question already has answers here:
What are the differences between virtual memory and physical memory?
(6 answers)
Closed 1 year ago.
I'm currently learning C++. During some heap allocation exercises I tried to generate a bad allocation. My physical memory is about 38GB. Why is it possible to allocate such a large amount of memory? Is my basic calculation of bytes wrong? I don't get it. Can anyone give me a hint please? Thx.
#include <iostream>

int main(int argc, char **argv) {
    const size_t MAXLOOPS {1'000'000'000};
    const size_t NUMINTS {2'000'000'000};
    int* p_memory {nullptr};

    std::cout << "Starting program heap_overflow.cpp" << std::endl;
    std::cout << "Max Loops: " << MAXLOOPS << std::endl;
    std::cout << "Number of Int per allocation: " << NUMINTS << std::endl;

    for (size_t loop = 0; loop < MAXLOOPS; ++loop) {
        std::cout << "Trying to allocate new heap in loop " << loop
                  << ". current allocated mem = " << (NUMINTS * loop * sizeof(int))
                  << " Bytes." << std::endl;

        p_memory = new (std::nothrow) int[NUMINTS];

        if (nullptr != p_memory)
            std::cout << "Mem Allocation ok." << std::endl;
        else {
            std::cout << "Mem Allocation FAILED!." << std::endl;
            break;
        }
    }
    return 0;
}
Output:
...
Trying to allocate new heap in loop 17590. current allocated mem = 140720000000000 Bytes.
Mem Allocation ok.
Trying to allocate new heap in loop 17591. current allocated mem = 140728000000000 Bytes.
Mem Allocation FAILED!.
Many (but not all) virtual-memory-capable operating systems use a concept known as demand-paging - when you allocate memory, you perform bookkeeping allowing you to use that memory. However, you do not reserve actual pages of physical memory at that time.1
When you actually attempt to read or write to any byte within a page of that allocated memory, a page fault occurs. The fault handler detects that the page has been pre-allocated but not demand-paged in. It then reserves a page of physical memory, and sets up the PTE before returning control to the program.
If you attempt to write into the memory you allocate right after each allocation, you may find that you run out of physical memory much faster.
Notes:
1 It is possible to have an OS implementation that supports virtual memory but immediately allocates physical memory to back virtual allocations; virtual memory is a necessary, but not sufficient condition, to replicate your experiment.
One comment mentions swapping to disk. This is likely a red herring - the pagefile size is typically comparable in size to memory, and the total allocation was around 140 TB which is much larger than individual disks. It's also ineffective to page-to-disk empty, untouched pages.
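To see the demand-paging effect in practice, here is a minimal sketch (a variation on the question's loop, not code from the original post): the memory is written immediately after each allocation, so the OS has to commit physical pages and the failure comes much earlier. Depending on the OS's overcommit policy, the program may instead be killed by the OOM killer rather than seeing a clean allocation failure.

// Sketch: allocate blocks in a loop and touch every page of each block.
#include <cstddef>
#include <cstring>
#include <iostream>
#include <new>

int main() {
    const size_t NUMINTS {2'000'000'000};
    for (size_t loop = 0; ; ++loop) {
        int* p = new (std::nothrow) int[NUMINTS];
        if (p == nullptr) {
            std::cout << "Allocation failed in loop " << loop << std::endl;
            break;
        }
        std::memset(p, 0, NUMINTS * sizeof(int));   // writing forces physical pages to be committed
        std::cout << "Allocated and touched block " << loop << std::endl;
    }
    return 0;
}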

Is this a double free in C++

I thought the following code snippets would cause a double free, and the program would core dump. But the truth is that there is no error when I run the code.
A similar problem shows that it caused a double free!
My question is: why is there no error showing that there is a double free? And why is there no core dump?
#include <iostream>
using namespace std;

int main()
{
    int *p = new int(5);
    cout << "The value that p points to: " << (*p) << endl;
    cout << "The address that p points to: " << &(*p) << endl;
    delete p;

    cout << "The value that p points to: " << (*p) << endl;
    cout << "The address that p points to: " << &(*p) << endl;
    delete p;

    cout << "The value that p points to: " << (*p) << endl;
    cout << "The address that p points to: " << &(*p) << endl;
    delete p;
}
The program's output when I ran it is shown as follows:
After modifying the code snippet as follows, the core dump occurred:
#include <iostream>
using namespace std;

int main()
{
    int *p = new int(5);
    for (;;)
    {
        cout << "The value that p points to: " << (*p) << endl;
        cout << "The address that p points to: " << &(*p) << endl;
        delete p;
    }
    return 0;
}
And the program output is:
So there is another question: why does this program core dump every time?
Yes, it is a double free (well, triple, really) which puts it into undefined behaviour territory.
But that's the insidious thing about undefined behaviour, it's not required to crash or complain, it's not required to do anything at all(a). It may even work.
I can envisage an implementation that stores the free state of a block in the control information for it so that freeing it twice would have no effect. However, that would be inefficient, and also wouldn't cover the case where it had been reallocated for another purpose (it would prevent double frees, but not a piece of code freeing the block when some other piece still thinks it owns it).
So, given it's not required to work, you would be well advised to steer clear of it since it may also download maniacal_laughter.ogg and play it while erasing your primary drive.
As an aside, modern C++ has smart pointers that are able to manage their own lifetime, and you would be doing yourself a big favour if you started using those instead of raw pointers. And, although the removal of raw pointers from C++ was a joke, there are some who think it's not such a bad idea :-)
(a) The C++20 standard has this to say when describing undefined behaviour in [defns.undefined] (my emphasis):
Behavior for which this document imposes **NO** requirements.
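As an illustration of the smart-pointer suggestion above (a minimal sketch, not part of the original answer; it assumes C++14 for std::make_unique), the same program written with std::unique_ptr has no delete to get wrong: the int is released exactly once, when the pointer goes out of scope.

// Sketch: ownership handled by std::unique_ptr, so a double free cannot be written by accident.
#include <iostream>
#include <memory>

int main()
{
    auto p = std::make_unique<int>(5);
    std::cout << "The value that p points to: " << *p << std::endl;
    std::cout << "The address that p points to: " << p.get() << std::endl;
    // no delete here: the int is freed exactly once when p is destroyed
}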
why is there no error showing that there is a double free? And why is there no core dump?
delete p;
cout << "The value that p points to: " << (*p) << endl;
The moment you dereference a deleted pointer, the program enters undefined behaviour, and then there is no guarantee that there will be an error or a crash.
It's not entirely the same, but the analogy between memory and a hotel room applies here and explains well what undefined behaviour means. Highly recommended reading:
Can a local variable's memory be accessed outside its scope?

Stack and heap address regions differ in Windows and Linux

I'm currently testing the address ranges of the heap and the stack in C++.
My code is:
#include <iostream>
using namespace std;

int g;
int uninitialized_g;

class Heap {
    int a;
    int b;
};

int main() {
    int stack_variable = 3;
    int stack_variable_1 = 3;
    g = 3;

    Heap * heap_class = new Heap;
    Heap * heap_class_1 = new Heap;

    cout << "Static initialized g's addr = " << &g << endl;
    cout << "Static un-initialized g's addr = " << &uninitialized_g << endl;
    cout << "Stack stack_variable's addr = " << &stack_variable << endl;
    cout << "Stack stack_variable1's addr = " << &stack_variable_1 << endl;
    cout << "Heap heap_class's addr = " << heap_class << endl;
    cout << "Heap heap_class1's addr = " << heap_class_1 << endl;

    delete heap_class;
    delete heap_class_1;
    return 0;
}
and in windows eclipse with MinGW, the result is
Static initialized g's addr = 0x407020
Static un-initialized g's addr = 0x407024
Stack stack_variable's addr = 0x22fed4
Stack stack_variable1's addr = 0x22fed0
Heap heap_class's addr = 0x3214b0
Heap heap_class1's addr = 0x3214c0
and in linux with g++ result is
Static initialized g's addr = 0x601180
Static un-initialized g's addr = 0x601184
Stack stack_variable's addr = 0x7ffff5c8c2c8
Stack stack_variable1's addr = 0x7ffff5c8c2cc
Heap heap_class's addr = 0x1c7c010
Heap heap_class1's addr = 0x1c7c030
which makes sense to me.
So, the questions are:
In the Windows result, why is the heap memory address sometimes allocated higher than the stack?
In Linux, the heap addressing makes sense. But why does the stack address grow higher?
Thanks in advance.
Your program runs in an environment provided by the operating system, so there is more code in action than you probably expected.
1) Stack & Heap
The stack address of the first thread is defined by the operating system. You can set some values in the PE32 exe file header to request a specific value, but this is handled differently on Linux.
The C runtime library requests memory from the operating system, IIRC with the function sbrk. The operating system can provide memory as it likes. Keep in mind that although you have a linear address space, you don't have a contiguous memory layout. It is more reminiscent of Swiss cheese.
2) Addresses of local variables
This is unspecified behavior. The compiler is free to choose the order in memory of the local variables. Sometimes I have seen the order be alphabetical (just try a rename) or change with the level of optimization. Just accept it.
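As a small illustration (not part of the original answer), the ordering of locals within one frame is up to the compiler, but comparing a local's address in a caller with one in a callee shows which way the stack grows on a given platform:

// Sketch: compare stack addresses across nested calls (compile without optimization,
// otherwise inner() may be inlined into main()).
#include <iostream>
using namespace std;

void inner() {
    int y = 0;
    cout << "inner's local at " << &y << endl;   // on x86/x86-64 this is usually the lower address
}

int main() {
    int x = 0;
    cout << "main's local at  " << &x << endl;
    inner();
    return 0;
}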

Maximum memory that can be allocated dynamically and at compile time in c++

I am playing around to understand how much memory can be allocated. Initially I thought that the maximum memory which can be allocated is equal to the physical memory (RAM). I checked my RAM on Ubuntu 12.04 by running the command shown below:
~$ free -b
total used free shared buffers cached
Mem: 3170848768 2526740480 644108288 0 265547776 1360060416
-/+ buffers/cache: 901132288 2269716480
Swap: 2428497920 0 2428497920
As shown above, the total physical memory is 3 GB (3170848768 bytes), of which only 644108288 bytes are free, so I assumed I can allocate at most this much memory. I tested it by writing the small program with only two lines below:
char * p1 = new char[644108290] ;
delete p1;
Since the code ran perfectly, it means it allocated the memory successfully. I also tried to allocate more memory than the available free physical memory, and still it did not throw any error. Then, per the question
maximum memory which malloc can allocate
I thought it must be using virtual memory. So I tested the code with the free swap size and it also worked.
char * p1 = new char[2428497920] ;
delete p1;
Then I tried to allocate free swap plus free RAM bytes of memory:
char * p1 = new char[3072606208] ;
delete p1;
But this time the code failed, throwing a bad_alloc exception. Why didn't the code work this time?
Now I allocated the memory at compile time in a new program as shown below:
char p[3072606208];
char p2[4072606208];
char p3[5072606208];
cout << "Size of array p = " << sizeof p << endl;
cout << "Size of array p2 = " << sizeof p2 << endl;
cout << "Size of array p2 = " << sizeof p3;
The output shows
Size of array p = 3072606208
Size of array p1 = 4072606208
Size of array p2 = 777638912
Could you please help me understand what is happening here? Why did it allow the memory to be allocated at compile time but not dynamically?
When allocated at compile time, how come p and p1 were able to allocate more memory than swap plus free RAM, whereas p2 failed?
How exactly is this working? Is this undefined behaviour or OS-specific behaviour? Thanks for your help. I am using Ubuntu 12.04 and gcc 4.6.3.
Memory pages aren't actually mapped to your program until you use them. All malloc does is reserve a range of the virtual address space. No physical RAM is mapped to those virtual pages until you try to read or write them.
Even when you allocate global or stack ("automatic") memory, there's no mapping of physical pages until you touch them.
Finally, sizeof() is evaluated at compile time, when the compiler has no idea what the OS will do later. So it will just tell you the expected size of the object.
You'll find that things will behave very differently if you try to memset the memory to 0 in each of your cases. Also, you might want to try calloc, which zeroes its memory.
Interesting.... one thing to note: when you write
char p[100];
you allocate (well, reserve) 100 bytes on the stack.
When you write
char* p = malloc(100);
you allocate 100 bytes on the heap. Big difference. Now I don't know why the stack allocations are working - unless the value between the [] is being read as an int by the compiler and is thus wrapping around to allocate a much smaller block.
Most OSs don't allocate physical memory anyway; they give you pages from a virtual address space which remain unused (and therefore unallocated) until you use them, and then the memory-management unit of the CPU will nip in to give you the memory you asked for. Try writing to those bytes you allocated and see what happens.
Also, on Windows at least, when you allocate a block of memory, you can only reserve the largest contiguous block the OS has available - so as the memory gets fragmented by repeated allocations, the largest single block you can malloc shrinks. I don't know if Linux has this problem too.
There's a huge difference between these two programs:
program1.cpp
#include <iostream>

int main () {
    char p1[3072606208];
    char p2[4072606208];
    char p3[5072606208];
    std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
    std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
    std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}
program2.cpp:
#include <iostream>

char p1[3072606208];
char p2[4072606208];
char p3[5072606208];

int main () {
    std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
    std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
    std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}
The first allocates memory on the stack; it's going to get a segmentation fault due to stack overflow. The second doesn't do much at all. That memory doesn't quite exist yet. It's in the form of data segments that aren't touched. Let's modify the second program so that the data are touched:
#include <iostream>

char p1[3072606208];
char p2[4072606208];
char p3[5072606208];

int main () {
    p1[3072606207] = 0;
    p2[3072606207] = 0;
    p3[3072606207] = 0;
    std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
    std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
    std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}
This doesn't allocate memory for p1, p2, or p3 on the heap or the stack. That memory lives in data segments. It's a part of the application itself. There's one big problem with this: On my machine, this version won't even link.
The first thing to note is that in modern computers, processes do not get direct access to RAM (at the application level). Rather, the OS provides each process with a "virtual address space". The OS intercepts accesses to virtual memory and reserves real memory as and when needed.
So when malloc or new says it has found enough memory for you, it just means that it has found enough memory for you in the virtual address space. You can check this by running the following program with the memset line and with it commented out (careful, this program uses a busy loop).
#include <iostream>
#include <new>
#include <string.h>
using namespace std;

int main(int argc, char** argv) {
    size_t bytes = 0x7FFFFFFF;
    size_t len = sizeof(char) * bytes;
    cout << "len = " << len << endl;

    char* arr = new char[len];
    cout << "done new char[len]" << endl;

    memset(arr, 0, len); // set all values in array to 0
    cout << "done setting values" << endl;

    while (1) {
        // stops program exiting immediately
        // press Ctrl-C to exit
    }
    return 0;
}
When memset is part of the program you will notice that the memory used by your computer jumps massively, and without it you should barely notice any difference, if any. When memset is called it accesses all the elements of the array, forcing the OS to make the space available in physical memory. Since the argument for new is a size_t (see here), the maximum argument you can call it with is 2^32-1 on a 32-bit system, though this isn't guaranteed to succeed (it certainly doesn't on my machine).
As for your stack allocations: David Hammem's answer says it better than I could. I am surprised you were able to compile those programs. Using the same setup as you (Ubuntu 12.04 and gcc 4.6) I get compile errors like:
test.cpp: In function ‘int main(int, char**)’:
test.cpp:14:6: error: size of variable ‘arr’ is too large
Try the following code:
#include <cstdio>
#include <new>

int main() {
    bool bExit = false;
    unsigned long long iAlloc = 0;
    do {
        char *test = NULL;
        try {
            test = new char[1]();   // deliberately leaked: keep allocating until it fails
            iAlloc++;
        } catch (std::bad_alloc&) {
            bExit = true;
        }
    } while (!bExit);

    char chBytes[130] = {0};
    sprintf(chBytes, "%llu", iAlloc);
    printf("%s\n", chBytes);
    return 0;
}
In one run don't open other programs; in the other run, load a few large files in an application which uses memory-mapped files.
This may help you to understand.