Buffer Overflow into a different exe's memory? Or onto csrss.exe from a remote desktop prog? - c++

Short, Question Form:
I did some googling but wasn't able to come up with the answer to this: is it possible to buffer overflow memory into another exe's memory? And/or, is it possible to overflow csrss.exe's memory from an exe running on a remote desktop session?
Longer Story - Here's Our Situation:
We've got a server with an always-running remote desktop session that hosts a 24/7 program - a C++ .exe. To make things worse, the C++ exe was programmed using all sorts of unsafe memory operations (raw strcpy, sprintf, etc.). You don't need to tell me how bad this is structurally - I completely agree.
Recently, our server has been hitting the Blue Screen Of Death, and the dump file indicates that csrss.exe is being terminated by our C++ exe (which will cause a BSOD; csrss.exe is also responsible for managing remote desktop sessions).
So I wanted to know if anyone knew whether it was possible for one app to do a memory buffer overflow that overflowed onto another app's memory space, or whether it'd be possible for an app on a remote desktop session to do so onto csrss.exe?
Any help would be greatly appreciated!

Short answer: no, it is not.
Simplified explanation of why: each program runs in its own virtual address space. This virtual address space is controlled by the page table, which is essentially a lookup table that maps virtual addresses (the addresses in the pointers of the executable) onto physical memory addresses. When the OS switches to a task, it hands the correct table to the CPU/core running that task. Any physical address not mentioned in this table will not be accessible from the program. Physical addresses belonging to another application should not appear in this table, so it would be impossible to access memory belonging to another application. When a program misbehaves and accesses an invalid memory location, it will attempt to use virtual addresses not mentioned in the table. This triggers an exception/fault on the CPU, which Windows normally reports as an "Access violation".
Of course, the OS and the CPU can contain bugs, so it is impossible to guarantee that this never happens. But if your C++ program misbehaves, most of the time this will be caught by the CPU and reported as an access violation, not result in a BSOD. If you do not see your C++ program generating access violations, I would consider it much more likely that the problem is caused by faulty memory or a buggy driver (drivers run at a higher privilege and can do things normal programs can't).
I would start with an extensive memory test using a program like memtest86. BTW, if the server is a "real" server with ECC memory, faulty memory shouldn't be the problem, as it would have been reported by the system.
Update
It doesn't matter how the bad memory access happens - underflow, overflow, uninitialized pointer. The virtual address used is either mapped to a physical memory location reserved for the program, or it is not mapped at all. BTW, the checking is done by the CPU; the OS only maintains the tables used to do the lookups.
However, this doesn't mean every error by the program will be detected, because as long as it is accessing addresses for which it was assigned memory, the access is fine as far as the CPU is concerned. The heap manager in your program might think otherwise, but it has no way of detecting this. So even a buffer overflow doesn't always cause an access violation, because memory is assigned to the program in pages of at least 4 kB, and the heap manager subdivides those pages into the smaller chunks the program asks for. Your small 10-byte buffer can be at the start of such a page, and writing a thousand bytes to it will be perfectly fine as far as the CPU is concerned, because all that memory was set up for use by the program. However, when your 10-byte buffer is at the end of the page and the next page is not mapped to a physical location, an access violation will occur.
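As a rough illustration of that last point (this is undefined behaviour, so the exact outcome is not guaranteed - a sketch for illustration only): the first write below probably stays inside pages the process already owns and the CPU raises nothing, while the second uses a virtual address with no mapping at all and should produce an access violation.
#include <cstring>

int main() {
    char *small = new char[10];       // the heap manager carves 10 bytes out of a page the process already owns
    std::memset(small, 0xAB, 1000);   // overflow, but likely still within mapped pages: the CPU is satisfied,
                                      // and the heap's own bookkeeping is silently corrupted instead
    int *wild = reinterpret_cast<int *>(1);
    *wild = 42;                       // virtual address with no mapping -> access violation / crash
}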

Related

Does Windows 10 protect you from accessing memory that another program is using?

The following C++ code works:
int *p = new int;
p[1000] = 12;
Meaning I access a memory location that is sizeof (int) * 1000 bytes away from p.
What I was thinking is that maybe Windows, or some other program, is currently using the memory location &p[1000] for something. And if I tried to set p[1000] to a new value, then another program, or even Windows itself, which might be using that location to hold some data, might crash, because I changed an important variable of that program.
Since C++ doesn't forbid this, I was wondering if at least Windows has some sort of protection against a program using a memory location currently used by someone else.
On Windows (and all other modern consumer operating systems) writing to a memory address you don't own will not directly affect memory belonging to any other process.
However, the operating system might be using that memory to provide essential services to your program, or the address might not be valid at all, so overwriting an address you don't own could cause your program to crash or behave in an unexpected way, either immediately or at some unpredictable point in the future. Google "undefined behavior" for more discussion of why this is a Bad Thing.
In the case of Windows, I have a vague recollection that the GUI uses some user-mode shared memory (for efficiency) so if you are really unlucky then writing to the wrong address might cause other GUI programs to malfunction, or perhaps even the entire GUI to become unresponsive, which would look very similar to an operating system hang from the user's point of view. I don't think I've ever seen that happen, though, so perhaps my information is out of date, or there are protective mechanisms in place to make this scenario less likely. (This does not represent a security vulnerability, because it only affects the user's other programs, and a malicious program could achieve the same effect in any number of other, more reliable ways.)
Memory is organized into PAGES. Each process sees a logical address space consisting of pages numbered 0-N.
The logical address space is divided into two ranges: user space and system space.
Each process has its own unique user space and all processes share the same system space. Your user space page 10 maps to a different physical location than some other process's user space page 10 (in most cases).
Memory in the system space is protected from user mode access. The only way to write to it is to switch to kernel mode. The operating system limits how you can do that to calls to specific system services. So, absent bugs (but we are talking M$ here) you should not be able to modify the system space willy-nilly.
It is possible for two applications to map memory in such a way that they are sharing memory locations in user mode. In that case, you can screw things up doing the type of thing you are illustrating. However, you have to explicitly map the memory in both processes.
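For completeness, here is a minimal sketch of that explicit sharing on Windows using CreateFileMapping/MapViewOfFile; the mapping name "Local\\DemoSharedBuffer" is made up for illustration, and both cooperating processes would run essentially the same code to see the same bytes.
#include <windows.h>
#include <cstring>

int main() {
    // Named, pagefile-backed shared memory; another process opening the same name maps the same physical pages.
    HANDLE mapping = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
                                        0, 4096, L"Local\\DemoSharedBuffer");
    if (mapping == nullptr) return 1;

    void *view = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (view == nullptr) { CloseHandle(mapping); return 1; }

    std::memcpy(view, "hello", 6);   // a buffer overflow into this view really would be visible to the other process

    UnmapViewOfFile(view);
    CloseHandle(mapping);
}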
Every process has its own address space. This space is mapped onto real addresses, and the mappings don't overlap.

could I potentially dodge segmentation faults by adding ram?

Just as my title asks, I'm wondering if I can potentially dodge segmentation fault crashes in my program by adding more RAM? Any advice on how to dodge them is appreciated, but this question is quite crucial since it determines whether I'll upgrade to 32 GB of RAM instead of 8.
The program is written in C++.
Just like "out of memory", "segmentation fault" does not refer to RAM.
In a typical modern computer, each process gets its own address space. That's just a bunch of addresses. Some of those addresses are likely to map to RAM but they can also map to ROM, to VRAM, to files on disk, or to anything else the operating system supports mapping to a process address space.
Segmentation faults are invalid accesses to parts of a process address space. They can be invalid because the address does not exist (because it wasn't mapped to anything), or because the address cannot be written to (because it was mapped in a read-only manner). They are caused by bugs in the program.
Adding RAM won't change the size or layout of any process address space.
No. The memory an application sees is virtual. That means the OS remaps the addresses the application sees onto backing physical memory.
As an optimization, memory the application doesn't request does not get mapped to real memory, and if the application tries to access it, a fault is generated.
So it doesn't matter whether you have 16 MB or 16 GB of physical RAM. Segfaults happen when a bug in the program leads it to try to access memory that it never got.
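To make that concrete, here is a tiny sketch (deliberate undefined behaviour, for illustration only) - installing more RAM changes nothing about it:
#include <vector>

int main() {
    std::vector<int> v(10);   // the process asked for 10 ints, nothing more
    v[1000000] = 1;           // far outside what was requested: no amount of installed RAM makes this valid,
                              // and on most systems it simply segfaults
}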

Memory usage in C++ program, as reported by Gnome resource monitor: confusion

I am looking at the memory consumed by my app to make sure I am not allocating too much, and am confused as to what Gnome Resource Monitor is showing me. I have used the following pieces of code to allocate memory in two separate apps that are otherwise identical; they contain nothing other than this code and a scanf() call to pause execution whilst I grab the memory usage:
malloc(1024 * 1024 * 100);
and
char* p = new char[1024*1024*100];
The following image shows the memory usage of my app before and after each of these lines:
Now, I have read a lot (but obviously not enough) about memory usage (including this SO question), and am having trouble differentiating between writeable memory and virtual memory. According to the linked question,
"Writeable memory is the amount of address space that your process has
allocated with write privileges"
and
"Virtual memory is the address space that your application has
allocated"
1) If I have allocated memory myself, surely it has write privileges?
2) The linked question also states (regarding malloc)
"...which won't actually allocate any memory. (See the rant at the end
of the malloc(3) page for details.)"
I don't see any "rant", and my images show the virtual memory has increased! Can someone explain this please?
3) If I have purely the following code:
char* p = new char[100];
...the resource monitor shows that both Memory and Writeable Memory have increased by 8KB - the same as when I was allocating a full one megabyte! - with Virtual memory increasing by 0.1. What is happening here?
4) What column should I be looking at in the resource monitor to see how much memory my app is using?
Thanks very much in advance for participation, and sorry if have been unclear or missed anything that could have led me to find answers myself.
A more precise way to understand the memory usage of a running process on Linux is to use the proc(5) file system.
So, if your process pid is 1234, try
cat /proc/1234/maps
Notice that processes have their address space in virtual memory. That address space can be changed by mmap(2) and other syscalls(2). For several efficiency reasons, malloc(3) and free avoid making too many of these syscalls and prefer to re-use previously free-d memory zones. So when your program is free-ing (or, in C++, delete-ing) some memory chunk, that chunk is often marked as re-usable but is not released back to the kernel (by e.g. munmap). Likewise, if you malloc only 100 bytes, your libc is allowed to e.g. request a whole megabyte using mmap (the next time you call malloc for e.g. 200 bytes, it will use part of that megabyte).
See also http://linuxatemyram.com/ and Advanced Linux Programming (and this question about memory overcommit)
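If you want to watch this from inside the program itself, a small Linux-specific sketch is to dump /proc/self/maps right after the allocation - a large malloc typically shows up as its own anonymous mapping:
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    void *p = std::malloc(1024 * 1024 * 100);   // ~100 MB; glibc will usually satisfy this with a separate mmap

    std::ifstream maps("/proc/self/maps");      // one line per mapping in this process's address space
    for (std::string line; std::getline(maps, line); )
        std::cout << line << '\n';              // look for a ~100 MB anonymous rw-p region

    std::free(p);
}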
The classes of memory reported by the Gnome resource monitor (and in fact, the vast majority of resource reporting tools) are not simply separate classes of memory - there is overlap between them because they are reporting on different characteristics of the memory. Some of those different characteristics include:
virtual vs physical - all memory in a processes address space on modern operating systems is virtual; that virtual address space is mapped to actual physical memory by the hardware capabilities of the CPU; how that mapping is done is a complex topic in itself, with a lot of differences between different architectures
memory access permissions - memory can be readable, writable, or executable, or any combination of the three (in theory - some combinations don't really make sense and so may actually not be allowed by hardware and/or software, but the point is that these permissions are treated separately)
resident vs non-resident - with a virtual memory system, much of the address space of a process may not actually be currently mapped to real physical memory, for a variety of reasons - it may not have been allocated yet; it may be part of the binary or one of the libraries, or even a data segment that has not yet been loaded because the program has not called for it yet; it may have been swapped out to a swap area to free up physical memory for a different program that needed it
shared vs private - parts of a process's virtual address space that are read-only (for example, the actual code of the program and most of the libraries) may be shared with other processes that use the same libraries or program - this is a big advantage for overall memory usage, as having 37 different xterm instances running does not mean that the code for xterm needs to be loaded into memory 37 different times - all the processes can share one copy of the code
Because of these, and a few other factors (IPC shared memory, memory-mapped files, physical devices that have memory regions mapped in hardware, etc.), determining the actual memory in use by any single process, or even the entire system, can be complicated.

Dangers of stack overflow and segmentation fault in C++

I'm trying to understand how objects (variables, functions, structs, etc.) work in C++. In this case I see there are basically two ways of storing them: the stack and the heap. Accordingly, whenever the stack storage is used it needs to be deallocated manually, but if the heap is used, then the deallocation is done automatically. So my question is about the kinds of problems that such bad practices might cause to the program itself or to the computer. For example:
1.- Let's suppose that I run a program with a recursive solution using an infinite chain of function calls. Theoretically the program crashes (stack overflow), but does it cause any trouble to the computer itself? (To the RAM maybe, or to the OS.)
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble for the program, or is it permanent for the computer in general? I mean, it might be that such memory could never be used again or something.
3.- What are the problems of getting a segmentation fault (the heap)?
Any other dangers or caveats relevant to this are welcome.
Accordingly, whenever the stack storage is used it needs to be deallocated manually, but if the heap is used, then the deallocation is done automatically.
When you use the stack - local variables in a function - they are deallocated automatically when the function ends (returns).
When you allocate from the heap, the allocated memory remains "in use" until it is freed. If you don't free it, your program, if it runs for long enough and keeps allocating "stuff", will use all memory available to it and eventually fail.
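A minimal sketch of that difference:
void demo() {
    int local = 42;             // stack: storage disappears automatically when demo() returns
    int *heap = new int(42);    // heap: stays "in use" until...
    delete heap;                // ...the program explicitly releases it; forget this and it leaks
}

int main() { demo(); }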
Note that "stackfault" is almost impossible to recover from in an application, because the stack is no longer usable when it's full, and most operations to "recover from error" will involve using SOME stack memory. The processor typically has a special trap to recover from stack fault, but that lands insise the operating system, and if the OS determines the application has run out of stack, it often shows no mercy at all - it just "kills" the application immediately.
1.- Let's suppose that I run a program with a recursive solution using an infinite chain of function calls. Theoretically the program crashes (stack overflow), but does it cause any trouble to the computer itself? (To the RAM maybe, or to the OS.)
No, the computer itself is not harmed by this in and of itself. There may of course be data-loss if your program wasn't saving something that the user was working on.
Unless the hardware is very badly designed, it's very hard to write code that causes any harm to the computer, beyond loss of stored data (of course, if you write a program that fills the entire hard disk from the first to the last sector, your data will be overwritten with whatever your program fills the disk with - which may well cause the machine to not boot again until you have re-installed an operating system on the disk). But RAM and processors don't get damaged by bad coding (fortunately, as most programmers make mistakes now and again).
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble for the program, or is it permanent for the computer in general? I mean, it might be that such memory could never be used again or something.
Once the program finishes (and most programs that use "too much memory" do terminate in some way or another, at some point), all of its memory is handed back to the operating system. The trouble is what happens while it is still running.
Of course, how well the operating system and other applications handle "there is no memory at all available" varies a little bit. The operating system in itself is generally OK with it, but some drivers that are badly written may well crash, and thus cause your system to reboot if you are unlucky. Applications are more prone to crashing due to there not being enough memory, because allocations end up with NULL (zero) as the "returned address" when there is no memory available. Using address zero in a modern operating system will almost always lead to a "Segmentation fault" or similar problem (see below for more on that).
But these are extreme cases; most systems are set up such that one application gobbling all available memory will in itself fail before the rest of the system is impacted - not always, and it's certainly not guaranteed that the application "causing" the problem is the first one to be killed if the OS kills applications simply because they "eat a lot of memory". Linux does have an "Out Of Memory killer" (OOM killer), which is a pretty drastic method to ensure the system can continue to work [by some definition of "work"].
3.- What are the problems of getting a segmentation fault (the heap)?
Segmentation faults don't directly have anything to do with the heap. The term segmentation fault comes from older operating systems (Unix-style) that used "segments" of memory for different usages, and a "segmentation fault" was when the program went outside its allocated segment. In modern systems, memory is split into "pages" - typically 4 KB each, but some processors have larger pages, and many modern processors support "large pages" of, for example, 2 MB or 1 GB, which are used for large chunks of memory.
Now, if you use an address that points to a page that isn't there (or isn't "yours"), you get a segmentation fault. This will typically end the application then and there. You can "trap" a segmentation fault, but in all operating systems I'm aware of, it's not valid to try to continue from this "trap" - but you could, for example, store away some information to explain what happened and help troubleshoot the problem later, etc.
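As a hedged sketch of that "store away some information" idea (POSIX flavour; the handler sticks to async-signal-safe calls and makes no attempt to continue):
#include <csignal>
#include <cstdlib>
#include <unistd.h>

extern "C" void on_segv(int) {
    // write() is async-signal-safe; do the bare minimum, then leave - do not try to resume execution
    const char msg[] = "segmentation fault - leaving a note and exiting\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(EXIT_FAILURE);
}

int main() {
    std::signal(SIGSEGV, on_segv);   // install the trap
    int *p = nullptr;
    *p = 1;                          // deliberately fault so the handler fires
}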
Firstly, your understanding of stack/heap allocations is backwards: stack-allocated data is automatically reclaimed when it goes out of scope. Dynamically-allocated data (data allocated with new or malloc), which is generally heap-allocated data, must be manually reclaimed with delete/free. However, you can use C++ destructors (RAII) to automatically reclaim dynamically-allocated resources.
Secondly, the 3 questions you ask have nothing to do with the C++ language, but rather they are only answerable with respect to the environment/operating system you run a C++ program in. Modern operating systems generally isolate processes so that a misbehaving process doesn't trample over OS memory or other running programs. In Linux, for example, each process has its own address space which is allocated by the kernel. If a misbehaving process attempts to write to a memory address outside of its allocated address space, the operating system will send a SIGSEGV (segmentation fault) which usually aborts the process. Older operating systems, such as MS-DOS, didn't have this protection, and so writing to an invalid pointer or triggering a stack overflow could potentially crash the whole operating system.
Likewise, with most mainstream modern operating systems (Linux/UNIX/Windows, etc.), memory leaks (data which is allocated dynamically but never reclaimed) only affect the process which allocated them. When the process terminates, all memory allocated by the process is reclaimed by the OS. But again, this is a feature of the operating system, and has nothing to do with the C++ language. There may be some older operating systems where leaked memory is never reclaimed, even by the OS.
1.- Let's suppose that I run a program with a recursive solution using an infinite chain of function calls. Theoretically the program crashes (stack overflow), but does it cause any trouble to the computer itself? (To the RAM maybe, or to the OS.)
A stack overflow should cause trouble neither to the operating system nor to the computer. Any modern OS provides an isolated address space to each process. When a process tries to put more data on its stack than there is space available, the OS detects it (usually via an exception) and terminates the process. This guarantees that no other processes are affected.
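For illustration, this is the classic way to end up there; when the stack runs out, only this process gets terminated:
int recurse(int n) {
    // no base case: every call consumes another stack frame until the guard page is hit
    return recurse(n + 1) + 1;   // the "+ 1" keeps this from being turned into a loop as a tail call
}

int main() { return recurse(0); }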
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble for the program, or is it permanent for the computer in general? I mean, it might be that such memory could never be used again or something.
It depends on whether your program is a long-running process or not, and on the amount of data that you're failing to deallocate. In a long-running process (e.g. a server) a recurrent memory leak can lead to thrashing: after some time, your process will be using so much memory that it won't fit in physical memory. This is not a problem per se, because the OS provides virtual memory, but the OS will spend more time moving memory pages between physical memory and disk than doing useful work. This can affect other processes and it might slow down the system significantly (to the point that it might be better to reboot it).
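A deliberately bad sketch of such a long-running leak (for illustration only - don't leave this running on a machine you care about):
#include <chrono>
#include <cstring>
#include <thread>

int main() {
    for (;;) {
        char *chunk = new char[1024 * 1024];    // 1 MB leaked on every iteration
        std::memset(chunk, 0, 1024 * 1024);     // touch it so the pages actually become resident
        std::this_thread::sleep_for(std::chrono::seconds(1));
        // the missing delete[] is the bug: the resident set only ever grows,
        // and eventually the system starts paging/thrashing
    }
}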
3.- What are the problems of getting a segmentation fault (the heap)?
A segmentation fault will crash your process. It's not directly related to the usage of the heap, but rather to accessing a memory region that does not belong to your process (because it's not part of its address space, or because it was but has since been freed). Depending on what your process was doing, this can cause other problems: for instance, if the process was writing to a file when the crash happened, it's very likely that the file will end up corrupted.
First, stack means automatic memory and heap means manual memory. There are ways around both, but that's generally a more advanced question.
On modern operating systems, your application will crash but the operating system and machine as a whole will continue to function. There are of course exceptions to this rule, but they're (again) a more advanced topic.
Allocating from the heap and then not deallocating when you're done just means that your program is still considered to be using the memory, even though you're not using it. If left unchecked, your program will eventually fail to allocate memory (out-of-memory errors). How you handle out-of-memory errors could mean anything from a crash (an unhandled error resulting in an unhandled exception, or a NULL pointer being accessed and generating a segmentation fault) to odd behavior (an exception caught, or a NULL pointer tested for, but with no special handling) to nothing at all (properly handled).
On modern operating systems, the memory will be freed when your application exits.
A segmentation fault in the normal sense will simply crash your application. The operating system may immediately close file or socket handles. It may also perform a dump of your application's memory so that you can try to debug it posthumously with tools designed to do that (more advanced subject).
Alternatively, most (I think?) modern operating systems will use a special method of telling the program that it has done something bad. It is then up to the program's code to decide whether or not it can recover from that or perhaps add additional debug information for the operating system, or whatever really.
I suggest you look into smart pointers for making your heap allocations behave a little like the stack - automatically deallocating memory when you're done using it. If you're using a modern compiler, see std::unique_ptr. If that type name can't be found, look into the Boost library. It's a little more advanced, but highly valuable knowledge.
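A minimal sketch of what that buys you:
#include <memory>

struct Widget { int value = 0; };

int main() {
    auto w = std::make_unique<Widget>();   // heap allocation, owned by w
    w->value = 42;                         // use it like a normal pointer
}                                          // w goes out of scope here and the Widget is deleted automatically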
Hope this helps.

C++: Can I get out of the bounds of my app's memory with a pointer?

If I have some stupid code like this:
int nBlah = 123;
int* pnBlah = &nBlah;
pnBlah += 80000;
*pnBlah = 65;
Can I change another app's memory?
You have explained to me that this is evil, I know. But I was just interested.
And this isn't something I can simply try - I don't know what would happen.
Thanks
In C++ terms, this is undefined behavior. What will actually happen depends on many factors, but most importantly it depends on the operating system (OS) you are using. On modern memory-managed OS's, your application will be terminated with a "segmentation fault" (the actual term is OS-dependent) for attempting to access memory outside of your process address space. Some OS's however don't have this protection, and you can willy-nilly poke and destroy things that belong to other programs. This is also usually the case if your code is inside kernel space, e.g. in a device driver.
Nope, it's not that simple. :)
Modern operating systems use virtual memory.
Every process is provided with a full virtual address space.
Every process is given its own "view" of all addresses (from 0x00000000 to 0xffffffff on a 32-bit system). Processes A and B can both write to the same address without affecting each other, because they're not accessing physical memory addresses but virtual addresses. When a process tries to access a virtual address, the OS translates it into some other physical address to avoid collisions.
Essentially, the OS keeps a table of allocated memory pages for every process. It tracks which address ranges have been allocated to a process, and which physical addresses they're mapped to. If a process tries to access an address not allocated to it, you get an access violation/segmentation fault. And if you try to access an address that is allocated to your process, you get your own data. So there is no way to read another process's data just by typing in the "wrong" address.
Under modern operating systems you don't get access to real memory, but rather to a virtual address space of 4 GB (under 32-bit): the bottom 2 GB for you to use, and the top 2 GB reserved for the operating system.
This does not correspond directly to actual memory bytes in the RAM.
Every app gets the same virtual address space (its own private copy of it), so there is no straightforward way of accessing another process's memory space.
I think this would raise exception 0xC0000005, access violation, on Windows.
Modern operating systems have various means of protecting against these kinds of exploits that write into the memory space of other programs. Your code wouldn't work either way, I don't think.
For more information, read up on Buffer Overflow exploits and how they gave Microsoft hell prior to the release of Windows XP SP2.