Address of stack variable keeps changing [duplicate] - c++

I recently came upon a Microsoft article that touted new "defensive enhancements" of Windows 7. Specifically:
Address space layout randomization (ASLR)
Heap randomization
Stack randomization
The article went on to say that "...some of these defenses are in the core operating system, and the Microsoft Visual C++ compiler offers others" but didn't explain how these strategies would actually increase security.
Anyone know why memory randomization increases security, if at all? Do other platforms and compilers employ similar strategies?

It increases security by making it hard to predict where something will be in memory. Quite a few buffer overflow exploits work by putting (for example) the address of a known routine on the stack, and then returning to it. It's much harder to do that without knowing the address of the relevant routine.
As far as I know, OpenBSD was about the first to do this, at least among the reasonably well-known OSes for PCs.

It makes attacks like return to libc (or return to user-provided data buffer in the case of the latter two) much harder. And yes, it is available in Linux, BSD, and Mac OS. As you would expect, the details vary by OS. See Wikipedia for an introduction.

By randomizing the stack you make vanilla buffer overflow attacks like Aleph One's Smashing the Stack for Fun and Profit impossible. The reason is that the attack relies on placing a small amount of executable code, called shellcode, at a predictable location in memory. The function's stack frame is corrupted and its return address overwritten with a value that the attacker chooses. When the corrupted function returns, the flow of execution moves to the attacker's shellcode. Traditionally this memory address was so predictable that it would be identical on all machines running the same version of the software.
Despite the advanced memory protection implemented in Windows 7, remote code execution is still possible. Recently at CanSecWest a machine running Windows 7 and IE 8 was hacked within seconds. Here is a technical description of a modern memory corruption attack utilizing a dangling pointer in conjunction with a heap overflow.

Related

What amount of memory is free for us to use? (C++, Windows 7)

When dynamically allocating some objects or variables in C++ (I'm using Windows 7), is there a way to find out how much memory (in bytes) is free for us to use, so we can prevent a crash? Also, I would like to know: is it OS-specific? If it is, what's the difference, for example, between Windows and some other widely used OS?
You can't easily find out how much free memory there is. Even the concept of free memory is unclear, since the OS may offer disk-backed virtual memory. Essentially, on modern personal computers and up, the main problem isn't running out of memory but running out of fast memory, getting into a regime with much page file activity and things really slowing down.
If a dynamic memory allocation fails in C++, you get a std::bad_alloc exception.
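For example, a minimal sketch of catching that exception (the deliberately enormous request is there so the allocation fails; on some systems with lazy overcommit it may actually succeed):
#include <cstdio>
#include <new>

int main()
{
    try {
        // A deliberately enormous request; on Windows this cannot be
        // committed, and operator new throws std::bad_alloc.
        char* p = new char[1024ULL * 1024 * 1024 * 1024];
        delete[] p;
    } catch (const std::bad_alloc&) {
        std::puts("allocation failed, shutting down cleanly instead of crashing");
    }
    return 0;
}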
You can install a so called new-handler to deal with an out-of-memory situation. It can log something and fail, or perhaps release some memory from a crisis fund (so to speak). In some cases this may allow a controlled program exit.
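A minimal sketch of installing such a handler with std::set_new_handler:
#include <cstdio>
#include <cstdlib>
#include <new>

void on_out_of_memory()
{
    // Log and exit in a controlled way. Alternatively, release a
    // pre-allocated reserve block here and return: operator new will
    // then retry the failed allocation.
    std::fputs("out of memory\n", stderr);
    std::exit(EXIT_FAILURE);
}

int main()
{
    std::set_new_handler(on_out_of_memory);
    // ... allocate as usual; the handler runs if operator new fails ...
    return 0;
}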
Even if you do find out how much memory would have been available at the time of the checking call, by the time you get to your allocation some other process, or another thread in your process, may have used up much of it, so the allocation can still fail.
Thus, you need to either be prepared for allocation failure, or design for memory consumption reasonable enough that you feel safe just ignoring the issue.
Thus, the answer to your question …
“is there a way to find out how much memory(in bytes) is there free for us to use so we can prevent a crash?”
is “no” – crashing (presumably due to not handling the bad_alloc exception) cannot be prevented by checking available memory beforehand.
It depends not upon the OS but upon the processor architecture. The amount of memory addressable by a process is determined by the number of address lines the processor provides.
If you are about to allocate a contiguous space, say an array, that can be more difficult, and far fewer cells may be available.
The best approach would be to allow the error to happen. malloc returns NULL when no memory is available or on error. Check for that case and take the necessary recovery action.
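A minimal sketch of that recovery pattern:
#include <cstdio>
#include <cstdlib>

int main()
{
    void* p = std::malloc(512u * 1024 * 1024);  // half a gigabyte
    if (p == NULL) {
        // Recovery action: free caches, save state, degrade gracefully, ...
        std::fputs("malloc failed, taking recovery action\n", stderr);
        return 1;
    }
    std::free(p);
    return 0;
}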

Is there a limit on the stack size of a process in Linux?

Is there a limit on the stack size of a process in Linux? Is it simply dependent on the RAM of the machine?
I want to know this in order to limit the depth of recursive calls to a function.
The stack is normally limited by a resource limit. You can see what the default settings are on your installation using ulimit -a:
stack size (kbytes, -s) 8192
(this shows that mine is 8MB, which is huge).
If you remove or increase that limit, you still won't be able to use all the RAM in the machine for the stack - the stack grows downward from a point near the top of your process's address space, and at some point it will run into your code, heap or loaded libraries.
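A minimal sketch of querying and raising the limit programmatically (POSIX getrlimit/setrlimit; without privileges you can raise the soft limit only up to the hard limit):
#include <cstdio>
#include <sys/resource.h>

int main()
{
    struct rlimit rl;
    getrlimit(RLIMIT_STACK, &rl);                    // current limits
    std::printf("soft: %llu, hard: %llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);

    rl.rlim_cur = 64ULL * 1024 * 1024;               // request a 64 MB soft limit
    if (setrlimit(RLIMIT_STACK, &rl) != 0)
        std::perror("setrlimit");
    return 0;
}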
The limit can be set by the admin.
See man ulimit.
There is probably a default which you cannot cross. If you have to worry about stack limits, I would say you need to rethink your design, perhaps write an iterative version?
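For instance, a recursive tree walk can be rewritten with an explicit heap-allocated stack, so the depth is bounded by available memory rather than by the thread stack (Node here is a hypothetical type):
#include <stack>
#include <vector>

struct Node {
    int value;
    std::vector<Node*> children;
};

// Depth-first traversal without recursion: the std::stack stores pending
// nodes on the heap, so very deep trees no longer overflow the call stack.
void visit_all(Node* root)
{
    if (!root) return;
    std::stack<Node*> pending;
    pending.push(root);
    while (!pending.empty()) {
        Node* n = pending.top();
        pending.pop();
        // ... process n->value here ...
        for (Node* child : n->children)
            pending.push(child);
    }
}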
It largely depends what architecture you're on (32 or 64-bit) and whether you're multithreaded or not.
By default, in a single-threaded process, i.e. the main thread created by the OS at exec() time, your stack will usually grow until it hits something else in the address space. This means that it is generally possible, on a 32-bit machine, to have, say, 1G of stack.
However, this is definitely NOT the case in a multithreaded 32-bit process. In multithreaded processes the stacks share the address space and hence need to be allocated up front, so they typically get a small amount of address space each (e.g. 1M) so that many threads can be created without exhausting the address space.
So in a multithreaded process the stack is small and finite; in a single-threaded one it basically grows until you hit something else in the address space (which the default allocation mechanism tries to ensure doesn't happen too soon).
In a 64-bit machine, of course there is a lot more address space to play with.
In any case you can always run out of virtual memory, in which case you'll get a SIGBUS or SIGSEGV or something.
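If the default per-thread stack is too small for your workload, a minimal sketch of requesting a bigger one with pthreads (the 8 MB figure is just an example; compile with -pthread):
#include <pthread.h>

void* worker(void*)
{
    // ... deep recursion or large locals would need the bigger stack ...
    return NULL;
}

int main()
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 8 * 1024 * 1024);  // 8 MB instead of the default

    pthread_t t;
    pthread_create(&t, &attr, worker, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}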
Would have commented on the accepted answer but apparently I need more rep....
A true stack overflow can be subtle and does not always produce error messages or warnings. I just had a situation where the only symptom was that socket connections would fail with strange SSL errors. Everything else worked fine. Threads could malloc(), grab locks, talk to the DB, etc. But new connections would fail at the SSL layer.
With stack traces from well within GnuTLS I was quite confused about the true cause. I nearly reported the traces to their team after spending lots of time trying to figure it out.
Eventually I found that the stack size was set to 8 MB, and immediately upon raising it the problems vanished. Lowering the stack back to 8 MB brought the problem back, confirming the cause.
So if you are troubleshooting what appear to be strange socket errors without any other warnings or uninitialized-memory errors... it could be stack overflow.

Returning the size of available virtual memory at run-time in C++

In C++ is there a predefined library function that will return the size of RAM currently available on a computer a program is being run on, at run-time?
For instance, if an object is 4 bytes, can we divide the available virtual memory by 4 bytes to get an estimate of how many more objects the program could store safely?
I have used the sizeof operator to get the size of objects within my program.
Since this was frequently asked for in the helpful responses: the platform the program is running on is Windows 7.
Thanks
Not in the C++ Standard Library - your operating system probably provides this facility though, via a platform-specific API.
There's nothing in the C++ standard that returns the amount of free memory available. Such a function, if available at all, would be platform-specific.
First of all, the size of the RAM has nothing to do with how much free virtual memory is available in the process. It's just that your program will slow down when RAM is scarce, due to frequent page faults. Also, the virtual memory will mostly be fragmented, so it makes more sense to look for things such as the largest contiguous free region instead of the total free memory.
There are no built-in C++ functions to do this; you have to use OS APIs to get it. For example, on Windows you can use the Win32 API to get this information.
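A minimal sketch of measuring the largest contiguous free region by walking the address space with VirtualQuery (Win32):
#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    MEMORY_BASIC_INFORMATION mbi;
    SIZE_T largest = 0;
    char* p = static_cast<char*>(si.lpMinimumApplicationAddress);
    const char* end = static_cast<char*>(si.lpMaximumApplicationAddress);

    while (p < end && VirtualQuery(p, &mbi, sizeof(mbi)) != 0) {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largest)
            largest = mbi.RegionSize;   // biggest free hole so far
        p += mbi.RegionSize;            // skip to the next region
    }
    std::printf("largest free region: %zu bytes\n", static_cast<size_t>(largest));
    return 0;
}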
It's platform specific, not part of the language standard.
However, there's a Windows specific API to get process memory informations: GetProcessMemoryInfo().
Additionally, virtual addressing allows processes to allocate more than the total physical RAM.
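A minimal sketch of calling it (link with psapi.lib; note this reports the process's usage counters, not the remaining free space):
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main()
{
    PROCESS_MEMORY_COUNTERS pmc = {};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        std::printf("working set:    %zu bytes\n", static_cast<size_t>(pmc.WorkingSetSize));
        std::printf("pagefile usage: %zu bytes\n", static_cast<size_t>(pmc.PagefileUsage));
    }
    return 0;
}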
In Win32 you can use:
#include <windows.h>
MEMORYSTATUS st;
::GlobalMemoryStatus(&st); // st.dwAvailVirtual = free virtual address space, in bytes
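Note that on machines with more than 4 GB of memory, GlobalMemoryStatus can report capped values; the Ex variant avoids this. A minimal sketch:
#include <windows.h>

MEMORYSTATUSEX st = {};
st.dwLength = sizeof(st);      // must be set before the call
::GlobalMemoryStatusEx(&st);
// st.ullAvailVirtual is the unused virtual address space of the calling process.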
There is no good solution for this in Windows. When a program frees a heap block, it almost always gets added to a list of free blocks. The only way to discover these is by walking the heap with HeapWalk(). That's expensive and very detrimental to the operation of a multi-threaded program, because you have to lock the heaps.
Also, a program almost never runs out of free virtual memory space. It first runs out of a free contiguous chunk of space that's large enough to fit the request. The sum of block sizes you get from HeapWalk is not meaningful unless you only ever make very small allocations.
The most typical reason for wanting a feature like this is because your program is routinely running out of memory. There is a very effective and cheap solution available for that problem. Two hundred bucks buys you a 64-bit version of Windows.

Information about PTE's (Page Table Entries) in Windows

In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4 KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes.
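For reference, a minimal sketch of such a guard-page allocator using VirtualAlloc/VirtualProtect (error handling and alignment of the returned pointer are ignored; note each reservation also consumes 64 KB of address-space granularity, which contributes to the exhaustion described below):
#include <windows.h>

// Allocate 'bytes' flush against an inaccessible guard page, so a write
// past the end of the block faults immediately.
void* guarded_alloc(size_t bytes)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    const size_t page  = si.dwPageSize;              // typically 4 KB
    const size_t pages = (bytes + page - 1) / page;  // data pages needed

    char* base = static_cast<char*>(VirtualAlloc(
        NULL, (pages + 1) * page, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
    if (base == NULL)
        return NULL;

    DWORD oldProtect;
    VirtualProtect(base + pages * page, page, PAGE_NOACCESS, &oldProtect);
    return base + pages * page - bytes;              // block ends at the guard page
}

void guarded_free(void* p)
{
    // VirtualFree needs the base of the original reservation; VirtualQuery
    // recovers it from any address inside the region.
    MEMORY_BASIC_INFORMATION mbi;
    VirtualQuery(p, &mbi, sizeof(mbi));
    VirtualFree(mbi.AllocationBase, 0, MEM_RELEASE);
}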
Problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:
since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved by building a 64-bit executable (I didn't try that yet).
even when I only need a few hundred megabytes, the allocations fail at a certain moment.
The second problem is the biggest one, and I think it's related to the maximum number of PTE's (page table entries, which store information on how Virtual Memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process.
My questions (or a cry-for-tips):
Where can I find information about the maximum number of PTE's in a process?
Is this different (higher) for 64-bit systems/applications or not?
Can the number of PTE's be configured in the application or in Windows?
Thanks,
Patrick
PS. A note for those who will try to argue that you shouldn't write your own memory manager:
My application is rather specific so I really want full control over memory management (can't give any more details)
Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption).
We also tried standard Windows utilities (like GFLAGS, ...) but they slowed down the application by a factor of 100, and couldn't find the exact position of the overwrite either
We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTE's)
There is what I thought was a great series of blog posts by Mark Russinovich on TechNet called "Pushing the Limits of Windows...":
http://blogs.technet.com/markrussinovich/archive/2008/07/21/3092070.aspx
It has a few articles on virtual memory, paged and nonpaged memory, physical memory, and others.
He mentions little utilities he uses to take measurements about a systems resources.
Hopefully you will find your answers there.
A shotgun approach is to allocate those isolated 4KB entries at random. This means that you will need to rerun the same tests, with the same input repeatedly. Sometimes it will catch the error, if you're lucky.
A slightly smarter approach is to use another algorithm than just random - e.g. make it dependent on the call stack whether an allocation is isolated. Do you trust std::string users, for instance, and suspect raw malloc use?
Take a look at the implementation of OpenBSD malloc. Much of the same ideas (and more) implemented by very skilled folk.
In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4 KB page instead of only the wanted number of bytes.
This has already been done. Application Verifier with PageHeap.
Info on PTEs and the Memory architecture can be found in Windows Internals, 5th Ed. and the Intel Manuals.
Is this different (higher) for 64-bit systems/applications or not?
Of course. 64-bit Windows has a much larger address space, so clearly more PTEs are needed to map it.
Where can I find information about the maximum number of PTE's in a process?
This is not so important as the maximum amount of user address space available in a process. (The number of PTEs is this number divided by the page size; for example, 2 GB of user address space with 4 KB pages needs 524,288 PTEs.)
This is 2 GB on 32-bit Windows and much bigger on x64 Windows. (The actual number varies, but it's "big enough".)
Problem is that although I have enough memory, the application never starts up completely because it runs out of memory.
Are you a) leaking memory? b) using horribly inefficient algorithms?

Pointer stability under Windows Vista

I have been using Visual Studio 2005 under Windows XP Pro 64-bit for C and C++ projects for a while. One of the popular tricks I have been using from time to time in the debugger was to remember a numeric pointer value from the previous debugging run of the program (say 0x00000000FFAB8938), add it to watch window with a proper typecast (say, ((MyObject *) 0x00000000FFAB8938)->data_field) and then watch the memory occupied by the object during the next debugging run. In many cases this is quite a convenient and useful thing to do, since as long as the code remains unchanged, it is reasonable to expect that the allocated memory layout will remain unchanged as well. In short, it works.
However, relatively recently I started using the same version of Visual Studio on a laptop with Windows Vista (Home Premium) 64-bit. Strangely enough, it is much more difficult to use this trick in that setup. The actual memory address seems to change rather often from run to run for no apparent reason, i.e. even when the code of the program was not changed at all. It appears that the actual address is not changing entirely randomly, it just selects one value from a fixed more-or-less stable set of values, but in any case it makes it much more difficult to do this type of memory watching.
Does anyone know the reason for this behavior on Windows Vista? What is causing the change in memory layout? Is it some external intrusion into the process address space from other [system] processes? Or is it some quirk/feature of the Heap API implementation under Vista? Is there any way to prevent this from happening?
Windows Vista implements address space layout randomization, heap randomization, and stack randomization. This is a security mechanism, trying to prevent buffer overflow attacks that rely on the knowledge of where each piece of code and data is in memory.
It's possible to turn off ASLR by setting the MoveImages registry value. I couldn't find a way to disable heap randomization, but some Microsoft guy recommends computing addresses relative to _crtheap. Even if the heap moves around, the relative address may remain stable.
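As a rough illustration of that relative-addressing trick in the watch window (the 0x8938 offset is made up; _crtheap is the CRT's heap handle, as mentioned above):
// Instead of watching an absolute address that moves between runs:
//     ((MyObject *) 0x00000000FFAB8938)->data_field
// watch an expression relative to the CRT heap base:
//     ((MyObject *) ((char *) _crtheap + 0x8938))->data_field
// If the heap is relocated as a whole, the offset from _crtheap can stay
// stable across runs even though the absolute address does not.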