I have been using Visual Studio 2005 under Windows XP Pro 64-bit for C and C++ projects for a while. One of the popular tricks I have been using from time to time in the debugger was to remember a numeric pointer value from a previous debugging run of the program (say 0x00000000FFAB8938), add it to the watch window with a proper typecast (say, ((MyObject *) 0x00000000FFAB8938)->data_field), and then watch the memory occupied by the object during the next debugging run. In many cases this is quite a convenient and useful thing to do, since as long as the code remains unchanged, it is reasonable to expect that the allocated memory layout will remain unchanged as well. In short, it works.
However, relatively recently I started using the same version of Visual Studio on a laptop with Windows Vista (Home Premium) 64-bit. Strangely enough, it is much more difficult to use this trick in that setup. The actual memory address seems to change rather often from run to run for no apparent reason, i.e. even when the code of the program has not changed at all. It appears that the address is not changing entirely randomly; it just selects one value from a more-or-less stable set of values. In any case, this makes it much more difficult to do this type of memory watching.
Does anyone know the reason for this behavior in Windows Vista? What is causing the change in memory layout? Is it some external intrusion into the process address space from other [system] processes? Or is it some quirk/feature of the Heap API implementation under Vista? Is there any way to prevent this from happening?
Windows Vista implements address space layout randomization, heap randomization, and stack randomization. This is a security mechanism, trying to prevent buffer overflow attacks that rely on the knowledge of where each piece of code and data is in memory.
It's possible to turn off ASLR by setting the MoveImages registry value. I couldn't find a way to disable heap randomization, but some Microsoft guy recommends computing addresses relative to _crtheap. Even if the heap moves around, the relative address may remain stable.
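For example (just a sketch: the 0x8938 offset is made up, and MyObject/data_field are the names from the question above), the watch expressions could look like this:

    // In one run, note the object's offset from the start of the CRT heap:
    (char*)0x00000000FFAB8938 - (char*)_crtheap
    // In later runs, watch the field through that offset instead of the raw address:
    ((MyObject*)((char*)_crtheap + 0x8938))->data_field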
I recently came upon a Microsoft article that touted new "defensive enhancements" of Windows 7. Specifically:
Address space layout randomization (ASLR)
Heap randomization
Stack randomization
The article went on to say that "...some of these defenses are in the core operating system, and the Microsoft Visual C++ compiler offers others" but didn't explain how these strategies would actually increase security.
Anyone know why memory randomization increases security, if at all? Do other platforms and compilers employ similar strategies?
It increases security by making it hard to predict where something will be in memory. Quite a few buffer overflow exploits work by putting (for example) the address of a known routine on the stack, and then returning to it. It's much harder to do that without knowing the address of the relevant routine.
As far as I know, OpenBSD was about the first to do this, at least among the reasonably well-known OSes for PCs.
It makes attacks like return-to-libc (or, in the case of the latter two, returning into a user-provided data buffer) much harder. And yes, it is available in Linux, BSD, and Mac OS. As you would expect, the details vary by OS. See Wikipedia for an introduction.
By randomizing the stack you make vanilla buffer overflow attacks like Aleph One's Smashing the Stack for Fun and Profit impossible. The reason is that the attack relies on placing a small amount of executable code, called shellcode, at a predictable location in memory. The function stack frame is corrupted and its return address overwritten with a value that the attacker chooses. When the corrupted function returns, the flow of execution moves to the attacker's shellcode. Traditionally this memory address is so predictable that it would be identical on all machines running the same version of the software.
Despite the advanced memory protection implemented in Windows 7, remote code execution is still possible. Recently at CanSecWest a machine running Windows 7 and IE 8 was hacked within seconds. Here is a technical description of a modern memory corruption attack utilizing a dangling pointer in conjunction with a heap overflow.
After a long debugging effort, I found out that my application probably writes a wrong value to address 0x5b81730. I would like to find out which part of my code does this.
Some time ago, when I used Windows XP, this would have been very easy. I would restart my application in a debugger (MS Visual Studio 2005), set a data breakpoint at that address, and the debugger would point me to the offending code.
Now, after I switched to Windows 7, this seems impossible (or at least very hard). When I run my application, I see that the address of the same heap object is slightly different on each run (e.g. 0x53b71b4 in one run but 0x55471b4 in another).
I have heard that Windows 7 has ASLR, which might be the reason I see these changes in addresses.
So what can I do to continue using my debugging technique?
Should I turn off ASLR? (I believe it's possible but couldn't find out how to do it)
Or is my problem caused by something else and not ASLR?
Or should I forget the convenience of using data breakpoints, and use some other techniques?
If you are hitting something like undefined behaviour (UB), there is absolutely no guarantee at all what the address will be. You cannot depend on it being the same every time.
However, you can try disabling ASLR in the Linker settings - one of the attributes there is "Randomized Base Address".
The command-line syntax is /DYNAMICBASE:NO. It doesn't exist in Visual Studio 2005, but it does exist in VS 2012 and later.
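If you need to turn it off for an already-built binary rather than relink, the VS command-line tools can patch and verify the flag (MyApp.exe is just a placeholder name):

    editbin /DYNAMICBASE:NO MyApp.exe
    dumpbin /headers MyApp.exe | findstr /i "dynamic"

After the editbin step, the "Dynamic base" line should no longer appear among the DLL characteristics. Note that this only affects binaries you patch yourself; system DLLs keep their randomized bases.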
I would try using Application Verifier. It is a great way to debug memory corruption and leak issues. It will break execution of your code when there is a memory corruption issue.
I have a C++ program which I'm running on a Windows 7 64-bit machine, using Eclipse as my IDE. I use mingw32 as my compiler.
The problem: When I debug the program using the gdb debugger, it runs just fine and does what it needs to do. But when I run it without debugging, either from the command line or from within Eclipse (using the same configuration as the debug run), it crashes.
I tried running the program from the command line and attaching to the process using the debugger, and what I saw is that it reaches the following line of code:
anc_map[ancestry].hap_array = (char**)calloc(anc_map[ancestry].nr_hap , sizeof(char*));
and just hangs (there is no CPU activity, and nothing happens even though the process is still running).
The above line is actually called more than once, and the hanging occurs the second time it is called (it works the first time).
Any idea what can be the cause for this behavior?
Thanks,
Itamar.
Edit:
I realize that using calloc is a little old-fashioned, but since this is legacy code that I only need to modify a little, I'm trying to avoid major refactoring.
I've tried compiling the code and running it on Linux, and the problem does not occur there, so it has something to do with my configuration on the Windows machine.
First thing that comes to mind is whether anc_map[ancestry].nr_hap could be some bogus, probably huge, number, probably because one of the variables got corrupted. I am not sure why it would get corrupted only without the debugger, but it might be that the debugger affects where things are allocated and the corruption lands somewhere less harmful when debugging.
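If that is the suspicion, one cheap way to check it without the debugger (a sketch only; the names nr_hap/hap_array come from the question, and the upper bound is an arbitrary sanity limit) is to wrap the allocation and log the requested count:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Log the count right before allocating, so a corrupted (huge or zero)
       value shows up even when no debugger is attached. */
    static char **alloc_hap_array(size_t nr_hap)
    {
        fprintf(stderr, "calloc request: nr_hap = %lu\n", (unsigned long)nr_hap);
        assert(nr_hap > 0 && nr_hap < 100000000UL);

        char **p = (char **)calloc(nr_hap, sizeof(char *));
        if (p == NULL) {
            fprintf(stderr, "calloc of %lu pointers failed\n", (unsigned long)nr_hap);
            abort();
        }
        return p;
    }

The original line would then become anc_map[ancestry].hap_array = alloc_hap_array(anc_map[ancestry].nr_hap);.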
The other thing that comes to mind is that, if the program needs a lot of memory, the debugger might affect the 2 GB limit flag in Windows, so that in one case there is enough memory and in the other you run out. I am, however, not sure how to change that flag with the mingw32 compiler, as I have only done it with the Microsoft toolchain (the /LARGEADDRESSAWARE option to Microsoft link and editbin). The reason the flag exists is that in some old software people wrote binary searches like (whatever *)(((unsigned)begin + (unsigned)end)/2), which, besides being incorrect C, does not work if the pointers are above 2 GB, because the calculation overflows. So for old software, written before more than 2 GB of memory was common, the memory was limited to 2 GB, with an option to get more: 3 GB on 32-bit Windows (the last 1 GB maps the kernel space, to avoid swapping page tables on kernel entry and exit; Linux does the same thing) and 4 GB for a 32-bit process on 64-bit Windows (where the kernel can be mapped above 4 GB).
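To make that overflow concrete (illustrative code, not from the original program):

    /* Midpoint of a range [begin, end) in a binary search. */
    static int *midpoint(int *begin, int *end)
    {
        /* Broken old-style form: adds the raw addresses, which wraps around in
           32-bit arithmetic once the pointers are above 2 GB (and is not valid
           C in the first place):
               return (int *)(((unsigned)begin + (unsigned)end) / 2);
        */

        /* Correct form: take the difference first, then offset from begin. */
        return begin + (end - begin) / 2;
    }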
Hm, but most likely it's actually corruption of the memory management metadata, because that's the usual case where memory allocation or deallocation functions just hang instead of returning an error. Again the debugger would cause some addresses to be different and the corruption to happen elsewhere.
In the first and last cases the corruption would probably always be there, so you might have some luck trying to run the program:
On Linux under Valgrind.
Under DUMA, but the Microsoft standard runtime library will resist replacing the memory allocation functions rather hard; I finally gave up when I found that IO streams use something like __debug_delete but plain new, or the other way around (I don't recall exactly). In either case, one was the standard allocation function and the other was some internal, undocumented function of theirs. It will also use much more memory than usual, because each allocation will take at least 8 kB. On Linux this is trivial, because GNU libc has special support for overriding the memory allocation functions, but Valgrind is superior there anyway.
I'm hesitant to ask this question because of the vagueness of the situation, but I'd like to understand how this is possible. I have a C++ application developed using Visual Studio 2008. When I compile the application on Windows 7 64-bit (or Vista 32-bit), the application runs fine. When I compile the application on 32-bit Windows XP SP3, I receive a buffer overrun warning and the process terminates. This is using the same version of the Visual Studio 2008 C++ compiler. How is it that I receive a buffer overrun on XP, but not on other Windows platforms?
Write code so you don't have buffer overruns and you won't have this problem on any platform. Namely, check the bounds of the buffer you are accessing so that you never read or write outside of them.
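For instance, one simple pattern (an illustrative sketch, not code from the application in question) is to clamp every copy to the size of the destination and terminate it explicitly:

    #include <string.h>

    /* Copy at most dst_size - 1 bytes and always NUL-terminate, so the write
       can never run past the destination buffer. */
    static void copy_bounded(char *dst, size_t dst_size, const char *src)
    {
        size_t n;
        if (dst_size == 0)
            return;
        n = strlen(src);
        if (n >= dst_size)
            n = dst_size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }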
Luck, the fundamental indeterminacy of the Universe, or (more likely than the previous two) an implementation detail that changed in msvcrt.dll between XP and 7.
Bottom line is you have a bug in your application, and you should fix it.
You probably have a buffer overrun in both cases; in the first it isn't detected and doesn't (apparently) do any harm, while in the second it is detected. (If the overrun is in dynamically allocated memory, keep in mind that allocators often allocate more than what was asked for, so a plausible explanation is that in the first case the overrun stays within that slack zone and in the second it doesn't.)
Sizes of data types can change from one compiler to another (thanks @AndreyT). Using a hardcoded number like 4 to represent the size of a data type in your code can introduce a bug in your application. Use sizeof(int) instead, or whatever type you are interested in.
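A small illustration of the point (hypothetical code): the hardcoded 4 happens to be right with one compiler and silently wrong with another, while sizeof is always right.

    #include <stdlib.h>

    void example(size_t count)
    {
        /* Assumes sizeof(long) == 4; under-allocates where long is 8 bytes. */
        long *bad = (long *)malloc(count * 4);

        /* Let the compiler supply the size; correct with every compiler. */
        long *good = (long *)malloc(count * sizeof(long));

        free(bad);
        free(good);
    }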
Windows 7 has a feature called the fault-tolerant heap which, as the name says, is tolerant of some faulty buffer accesses. Windows XP doesn't have this feature (Vista, I don't know). There is a video about it by Mark Russinovich on channel9.msdn.com or sysinternals.com (I forget exactly where).
I'm working on porting a 32-bit Windows C++ app to 64-bit. Unfortunately, the code frequently casts in both directions between DWORD and pointer values.
One of the ideas is to reserve the first 4 GB of the virtual process space as early as possible during process startup, so that all subsequent memory allocations come from virtual addresses above 4 GB. This would cause an access violation on any unsafe cast from pointer to DWORD and back to pointer, and would help catch errors early.
When I look at the memory map of a very simple one-line C++ program, though, there are many libraries loaded within the bottom 4 GB. Is there a way to make sure that all libraries, etc. get loaded only above 4 GB?
Thanks
Compile your project with the /Wp64 switch (Detect 64-bit Portability Issues) and fix all warnings.
As a programmer, what do I need to worry about when moving to 64-bit windows?
You could insert calls to VirtualAlloc() as early as possible in your application, to allocate memory in the lower 4GB. If you use the MEM_RESERVE parameter, then only virtual memory space is allocated and so this will only use a very small amount of actual RAM.
However, this will only help you with memory allocated from the heap - any static data in your program will already have been allocated before WinMain(), so you won't be able to change its location.
(As an aside, even if you could reserve memory before your main binary was loaded, I think the main binary needs to be loaded at a specific address - unless it is built as a position-independent executable.)
Bruce Dawson posted code for a technique to reserve the bottom 4 GB of VM:
https://randomascii.wordpress.com/2012/02/14/64-bit-made-easy/
It reserves most of the address space (not actual memory) using VirtualAlloc, then goes after the process heap with HeapAlloc, and finishes off the CRT heap with malloc. It is straightforward, fast, and works great. On my machine it does about 3.8 GB of virtual allocations and only 1 MB of actual allocations.
The first time I tried it, I immediately found a longstanding bug in the project I was working on. Highly recommended.
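For reference, here is a minimal sketch of just the VirtualAlloc pass (not Bruce Dawson's exact code; the HeapAlloc and malloc passes from the article are omitted, and failed reservations are simply ignored):

    #include <windows.h>

    /* Walk the low 4 GB of the 64-bit address space and reserve every free
       region, so that later real allocations are forced above 4 GB.
       MEM_RESERVE only consumes address space, not physical memory. */
    static void ReserveBottom4GB(void)
    {
        const ULONG_PTR limit = 0x100000000ULL;  /* 4 GB boundary        */
        ULONG_PTR addr = 0x10000;                /* skip the first 64 KB */
        MEMORY_BASIC_INFORMATION mbi;

        while (addr < limit &&
               VirtualQuery((LPCVOID)addr, &mbi, sizeof(mbi)) != 0)
        {
            if (mbi.State == MEM_FREE)
            {
                SIZE_T size = mbi.RegionSize;
                if ((ULONG_PTR)mbi.BaseAddress + size > limit)
                    size = (SIZE_T)(limit - (ULONG_PTR)mbi.BaseAddress);
                /* May fail for blocks that are not 64 KB aligned; ignore. */
                VirtualAlloc(mbi.BaseAddress, size, MEM_RESERVE, PAGE_NOACCESS);
            }
            addr = (ULONG_PTR)mbi.BaseAddress + mbi.RegionSize;
        }
    }

Call it as early as possible (at the top of main()/WinMain()), before anything on the heap has been allocated.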
The best solution is to fix these casts ...
You may get away with truncating the pointer regardless (the same as casting to a POINTER_32), because I believe Windows favours the lower 4 GB for your application anyway. This is in no way guaranteed, though. You really are best off fixing these problems.
Search the code for "(DWORD)" and fix any you find. There is no better solution ...
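As a sketch of what the fix usually looks like (hypothetical code), store pointers in pointer-sized integer types instead of DWORD:

    #include <windows.h>   /* DWORD_PTR / ULONG_PTR are pointer-sized on both 32 and 64 bit */

    void store_pointer(void *p)
    {
        /* DWORD bad = (DWORD)p;           truncates the pointer on 64-bit      */
        DWORD_PTR ok = (DWORD_PTR)p;    /* round-trips safely on 32 and 64 bit */
        void *back = (void *)ok;        /* ...and back again, unchanged        */
        (void)back;
    }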
What you are asking for is, essentially, to run 64-bit code in a 32-bit memory mode with AWE enabled (i.e. to lose all the real advantages of 64-bit). I don't think Microsoft could be bothered providing this for so little gain ... and who can blame them?