I am sharing memory between my driver and my application.
After allocating and mapping it, I clean up by unmapping and freeing.
I found that allocating and mapping creates a brand-new virtual memory page in the application, which both the driver and the application can access, but the page does not disappear after the clean-up.
I want to know whether I should expect the page to be gone after the clean-up.
I saw that MmAllocatePagesForMdlEx and MmMapLockedPagesSpecifyCache allocate a brand-new page of virtual memory in the application, one that did not appear in the memory list before. After cleaning up with MmUnmapLockedPages and MmFreePagesFromMdl, I still see that memory in the list, and it can still be read and written by the VC++ debugger and by the application itself.
I considered that the clean-up might have failed, but something else confuses me: the clean-up zeroes the memory page, which makes me think it actually is working.
There is one more thing I tried. I used a tool called CE to read the memory, and it could not read it. CE uses ReadProcessMemory. So I am confused again.
To summarize, I did
Allocate - MmAllocatePagesForMdlEx
Map - MmMapLockedPagesSpecifyCache
Unmap - MmUnmapLockedPages
Free - MmFreePagesFromMdl
The result is:
VC++ debugger - can read the cleaned-up memory
Application itself - can read the cleaned-up memory
CE (standard API) - cannot read the cleaned-up memory
CE (standard API) - can still see the region in its memory list
The methods I used to test reading the cleaned-up memory:
VC++ debugger - Memory tab, "Go to address"
Application - value = *(ULONG64*)addr;
The main code is as below:
mdl = MmAllocatePagesForMdlEx(least, most, least, totalBytes, MmCached, 0);
MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);
userAddr = MmMapLockedPagesSpecifyCache(mdl, UserMode, MmCached, NULL, FALSE, NormalPagePriority);
MmUnmapLockedPages(userAddr, mdl);
MmFreePagesFromMdl(mdl);
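For completeness, here is roughly the full sequence I am using, written out as a sketch (simplified; totalBytes is a placeholder for my real size, error handling is reduced to NULL checks, and this is kernel-mode code that needs the WDK headers):

    // Sketch only (kernel mode, wdm.h): allocate pages via an MDL, map them
    // into the calling user-mode process, then tear everything down.
    PVOID userAddr = NULL;
    PMDL  mdl      = NULL;

    PHYSICAL_ADDRESS least = { 0 };            // lowest acceptable physical address
    PHYSICAL_ADDRESS skip  = { 0 };
    PHYSICAL_ADDRESS most;
    most.QuadPart = 0xFFFFFFFFFFFFFFFF;        // no upper limit on physical address

    mdl = MmAllocatePagesForMdlEx(least, most, skip, totalBytes, MmCached, 0);
    if (mdl != NULL) {
        __try {
            // Mapping with UserMode must be guarded by __try/__except.
            userAddr = MmMapLockedPagesSpecifyCache(mdl, UserMode, MmCached,
                                                    NULL, FALSE, NormalPagePriority);
        }
        __except (EXCEPTION_EXECUTE_HANDLER) {
            userAddr = NULL;
        }
    }

    // ... the application reads/writes through userAddr ...

    // Clean-up, in this order:
    if (userAddr != NULL) {
        MmUnmapLockedPages(userAddr, mdl);     // remove the user-mode view
    }
    if (mdl != NULL) {
        MmFreePagesFromMdl(mdl);               // release the physical pages
        ExFreePool(mdl);                       // the MDL structure must be freed separately
    }

The final ExFreePool is there because, as far as I understand the documentation, MmFreePagesFromMdl releases the pages but not the MDL structure itself.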
I expected that after the unmap and free calls the allocated memory would be gone, because even the mapped physical memory is freed.
So, again, my question is: should I expect the cleaned-up memory to become No Access in the application / user-mode address space?
Related
To improve the security of my application, I am trying to delete string data from the process memory, but since there is little information about this on the Internet, I have not been able to write working code.
Can anyone help me?
My pasted code:
void MemoryStringsClear() {
    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, GetCurrentProcessId());
    MEMORY_BASIC_INFORMATION mbi;
    char* addr = 0;
    while (VirtualQueryEx(hProc, addr, &mbi, sizeof(mbi)))
    {
        if (mbi.State != MEM_COMMIT || mbi.Protect == PAGE_NOACCESS)
        {
            //char* buffer = new char[mbi.RegionSize];
            //ReadProcessMemory(hProc, addr, buffer, mbi.RegionSize, nullptr);
            if (addr) {
                cout << "Addr: " << &addr << " is cleared!" << endl;
                memset(addr, '0', mbi.RegionSize);
            }
        }
        addr += mbi.RegionSize;
    }
    CloseHandle(hProc);
}
EDITED:
I chose this way of solving the problem because my application consists of many modules (.exe applications), some of which I cannot change.
There are some problems with your approach (my idea for a solution is further down):
Most of the strings listed are environment variables
All of the programs that run on your computer have access to those. They are copied to the memory space of every program on startup so every program knows where to look for certain files. There is no point in removing them from the memory of your application, since every application running on your computer already knows them.
You can see them by running cmd.exe, typing set and then pressing return.
OpenProcess and VirtualQueryEx are for accessing another process
You could simply use VirtualQuery, since you only want to access your own process.
I guess you are trying to get access to non-committed memory pages by doing this, but memset can only access committed, writable memory pages in your own program's address space. So those two approaches don't mix.
But there is a more important point to this:
Non-committed memory does not exist
If a memory page is not committed, there is no actual memory assigned to that address. That means, that there is nothing you can overwrite with zeroes. The memory containing your strings may already have been assigned to another application. Read some information about virtual memory management for details.
Most calls to free, delete or garbage collection do not always actually decommit the page
For efficiency reasons, when your code allocates and deallocates memory, your runtime library hands you small pieces of a larger page of memory (called the "heap"), and that page is only decommitted once every single piece in it has been freed.
You could find freed blocks of memory by walking over the heap entries, but how that works depends on your C runtime library or other runtime libraries.
The operating system might move your strings around
If the operating system detects a shortage of memory, it can save your strings to disk to free up memory for other applications, and reload them when your application becomes active again. It usually does not bother to clean up the disk afterwards. You have no influence on that (unless you format your hard drive).
My ideas for a solution
Before every call to free or delete in your code that frees memory with sensitive information (and only those), you can call memset(...) on that single block of memory. In C++, you can wrap that up in a class which clears its memory on destruction, as Alan Birtles pointed out in his comment.
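As a rough illustration of that wrapper idea, a minimal sketch (the class name SecureBuffer and its interface are my own invention, not an existing library type):

    #include <windows.h>
    #include <vector>

    // Holds sensitive bytes and guarantees they are overwritten on destruction.
    // SecureZeroMemory is used instead of memset so the compiler cannot
    // optimize the clearing away.
    class SecureBuffer {
    public:
        explicit SecureBuffer(size_t size) : data_(size) {}
        ~SecureBuffer() {
            if (!data_.empty())
                SecureZeroMemory(data_.data(), data_.size());
        }
        char*  data()       { return data_.data(); }
        size_t size() const { return data_.size(); }
    private:
        std::vector<char> data_;
    };

Anything sensitive read into such a buffer is wiped as soon as the object goes out of scope; the same idea can be extended to std::string via a custom allocator.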
I don't think there is a solution that you can simply pop onto an existing program that clears sensitive information after the memory has been freed.
This approach leaves only the last problem. You can only circumvent that if you never store your sensitive information unencrypted in memory, which is probably not feasible, since it would mean handling the data only in encrypted form.
What will be difficult or impossible
If you want to clear freed memory in other processes (the separate *.exe files you cannot change you refer to in your edit), you have to understand the internal heap layout of those and use WriteProcessMemory instead of memset.
But this does not catch the case where the other program actually decommits a page, since you do not know if the operating system has already reassigned it. When this happens is completely outside of your control.
You might also try to reimplement the free and delete functions in your C runtime library so they first clear the memory and then call the original version, but this only works if they are actually used by those *.exe files and they are dynamically linked. If these conditions are met, you might still have a hard time.
Define the security threats you want to protect against
To improve the security of my application,
What exactly are you trying to guard against? Have you verified that clearing process memory will actually work against the security attacks that you want to defend against?
Know how memory works
Find out how your operating system allocates both virtual and physical memory, otherwise wrong assumptions about how it works might lead you to implement ineffective solutions. Most computer systems use virtual memory, which means some of your memory might actually end up being copied to different places in physical RAM or to disk. On the other hand, if your process exits and a new process starts, most operating systems will clear the RAM used by the first process before assigning it to the second.
Ensure you have full control over the memory you want to clear
As Iziminza already mentioned, your process has virtual memory, but the operating system can choose how to back that virtual memory with physical memory. When it needs RAM for some other process, it can decide to move your data to a swap file on disk until it is needed again. In order to make clearing memory with memset() meaningful, you must ensure there are no copies stored elsewhere. You can do this by using VirtualLock() on Windows, or mlock() on other operating systems. Even then, if the computer goes into hibernation, locked memory is still written to disk.
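A minimal sketch of that locking idea on Windows (the function name and buffer size are illustrative; error handling is reduced to comments):

    #include <windows.h>

    void HandleSecret()
    {
        const SIZE_T size = 4096;
        char* secret = static_cast<char*>(
            VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE));
        if (!secret) return;

        // Keep the pages in physical RAM so they are not written to the page file.
        if (!VirtualLock(secret, size)) {
            // Locking can fail if the working-set quota is too small.
        }

        // ... put the sensitive data into 'secret' and use it ...

        // Wipe before unlocking/freeing; SecureZeroMemory is not optimized away.
        SecureZeroMemory(secret, size);
        VirtualUnlock(secret, size);
        VirtualFree(secret, 0, MEM_RELEASE);
    }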
I have an application running on an ARM Cortex-A9. When I enter a certain portion of the code, I can see in the Linux tasks view 'top' that the application grows in memory usage until it gets Killed due to running out of physical memory.
Now, I have done some research on this and tried using mtrace, but it did not give me very concise results. Basically I get something like this:
Memory not freed:
-----------------
Address Size Caller
0x03aafe18 0x38 at 0x76e73c18
0x53a004a8 0x38 at 0x76e73c18
And I do not even think this is the main problem (it may just be another, smaller issue).
I also cannot use Valgrind (which would probably work great) because there is not enough space on the device to install it and a compiler...
So I fear that I just have to go through the code and look for something that could be causing growing memory usage. Is there a guide for this somewhere? In the code, "malloc" or "new" is almost never used.
I do have access to use gdb, if that can help.
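For context, a minimal sketch of how mtrace is typically wired in, which is essentially what I did (the trace file path is just an example):

    #include <cstdlib>
    #include <mcheck.h>   // glibc malloc tracing

    int main()
    {
        // mtrace() logs every malloc/free to the file named by MALLOC_TRACE.
        setenv("MALLOC_TRACE", "/tmp/mtrace.log", 1);
        mtrace();

        // ... run the suspicious part of the application ...

        muntrace();       // stop tracing; analyse with: mtrace ./app /tmp/mtrace.log
        return 0;
    }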
One thing I am not clear on is if the following is a problem:
while(someloop){
...
double *someptr;
...
}
or like
while(someloop){
...
int32 someArray[100] = {0};
...
}
There is a lot of this in the code. When the loop comes around again and instantiates those variables or pointers, does it keep consuming new space, or does it reuse the space from the previous iteration?
If it is allocated on the stack, the memory is reused. However, anything allocated on the heap needs to be deleted.
Also, if you allocate with double* ptr; ... ptr = new double[5];, you need to release it with delete[] ptr;.
In C++ you can override the new and delete operators to print some message for debugging (see the sketch at the end of this answer).
Best would be to debug using gdb and see which objects are created and not deleted.
It is possible that you use a class in your code that does not delete something internal.
Tip: for small objects, allocating on the stack is both faster and safer.
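A minimal sketch of that operator override idea (illustrative only; the logging is not thread-safe, and fprintf goes through malloc rather than operator new, so it does not recurse):

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Global replacements: every new/delete in the program goes through these.
    void* operator new(std::size_t size)
    {
        void* p = std::malloc(size);
        if (!p) throw std::bad_alloc();
        std::fprintf(stderr, "new    %zu bytes -> %p\n", size, p);
        return p;
    }

    void operator delete(void* p) noexcept
    {
        std::fprintf(stderr, "delete %p\n", p);
        std::free(p);
    }

Pointers that show up in a "new" line but never in a "delete" line are the candidates to chase with gdb.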
I was always under the impression that trying to access dynamically freed memory (first allocated and then deleted/freed) would end in a core dump.
However, when I executed the code below, it ran through successfully.
int* ptr = new int(3);
delete ptr;
cout << "value : " << *ptr << " " << ptr;
So I went ahead and created a dangling pointer and explicitly tried to access the memory, but now it dumped.
int* aptr;
aptr = (int*)0x561778;
cout << "value : " << *aptr << " " << aptr;
1) If we cannot access a memory beyond a given process space, then how is it that I was able to access the memory after I freed/released it? ( First example )
Also, If that's the case then how do anti-viruses scan the memory allocated by other processes?
2) If I dynamically allocate but don't free the memory, it causes a memory leak.
But what if my entire process is killed, or finishes execution and exits?
Wouldn't the OS then clean up all the resources allocated to this process? So does a memory leak only matter for long-running processes?
If this is true, then when I explicitly try to access the contents of another process, how would the OS ensure that it doesn't free up this memory?
1) If we cannot access a memory beyond a given process space, then how
is it that I was able to access the memory after I freed/released it?
( First example )
Because the C or C++ runtime keeps a "heap" of memory, and when you call free or delete, the memory is not actually rendered unusable to the process, it is simply put back into the "free memory" region of the heap, so it will be re-used. This is done because it's very common for a process to allocate some memory, free it, allocate some again, free it, etc. For example
void readfile(const std::string& fname)
{
    std::ifstream f(fname.c_str());
    std::string* content = new std::string;
    while (std::getline(f, *content))
    {
        ...
    }
    delete content;
}
This function (stupidly, because we should not allocate the std::string on the heap) will first allocate space for the std::string object, and then space for the content [possibly in several portions] inside the std::string while getline is reading the file. There may be other memory allocations as well, for example inside std::ifstream.
The heap is designed to minimise the number of times it asks the OS to map/unmap memory from global physical memory into a particular process, since mapping and unmapping virtual memory is quite "expensive" in terms of performance on nearly all processors (in particular, unloading the now-defunct memory pages from other cores involves sending a message to the other processor, which stops what it is currently doing, updates its virtual mappings, and then answers back "I've done that" before continuing where it was). And of course, if the OS didn't unmap the memory of a process when the process stops using it, that process could indeed "use" those memory addresses to read the content of other processes, which would be a bad thing, so the OS forces all processor cores to give up their mappings before that bit of memory can be used again for another process [at least].
Edit: To clarify, the heap will sometimes release memory back to the OS. For example, if you make a LARGE allocation, and then free that same allocation, it may well unmap that immediately, and thus you wouldn't be able to access the memory after it has been freed. Any access of memory after it has been freed is undefined behaviour, the runtime can (and quite often will) do anything it likes with the memory at that point. The most common scenarios are:
Just keep it as it is, but put it in the "freed" pile.
Keep it around as freed memory, but fill it with some "magic" pattern so that writes to it can be noticed, meaning "use after free" can be detected (a very good thing to detect! - a toy illustration follows after this list).
The memory is unmapped, and no longer available to the current process.
The memory is almost immediately allocated for another purpose, and used again.
The same OS can at different times use any of these scenarios, in almost any order.
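A toy illustration of the second scenario (everything here is made up for illustration: the 0xDD fill byte mirrors what MSVC's debug heap uses, and debug_free / was_written_after_free are not real library functions):

    #include <cstddef>
    #include <cstring>
    #include <utility>
    #include <vector>

    // Toy "quarantine" free: instead of returning the block to the heap,
    // poison it with a recognizable pattern and park it on a list, so a
    // later write can be detected by checking the pattern.
    static std::vector<std::pair<void*, std::size_t>> g_quarantine;

    void debug_free(void* p, std::size_t size)
    {
        if (!p) return;
        std::memset(p, 0xDD, size);          // 0xDD: "dead" fill byte
        g_quarantine.push_back({p, size});   // keep the block; do not reuse it yet
    }

    bool was_written_after_free(void* p, std::size_t size)
    {
        const unsigned char* b = static_cast<const unsigned char*>(p);
        for (std::size_t i = 0; i < size; ++i)
            if (b[i] != 0xDD) return true;   // someone touched freed memory
        return false;
    }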
Also, If that's the case then how do anti-viruses scan the memory
allocated by other processes?
Completely different question. They use either debug interfaces to read another process's memory, or their own kernel driver functionality that uses kernel functions that allow any process to read any other process's memory - after all, the kernel can do anything. [Actually, fairly often, the anti-virus software is more interested in what is being loaded into memory from a file, so apply file access filters that examine reads/writes of data to/from files, rather than scanning what is in memory]
2) If I dynamically allocate but don't free the memory, then it would
cause memory leak. But what if my entire process was killed? or
completed execution, so got closed. Then wouldn't OS ensure to clean
up all the resources allocated to this process? So would memory leak
occur only for processes which are on a long run?
A process's memory is freed when the process dies. Always, every time. If it weren't, you could break a system simply by starting a process that allocates a fair amount of memory, killing it on purpose, running it again, killing it, running, killing, and so on. That would be a bad thing, right?
You can of course have a very quick memory leak. Try:
std::string str;
str.append(100000, 'x');

std::vector<std::string> v;
int i = 0;
while (1)
{
    std::cout << "i=" << i << std::endl;
    v.push_back(str);
    ++i;
}
It won't take many seconds before the system starts swapping, and a little while later the process will be killed (if you don't get bored and kill it first). Expect a Linux system to get fairly unresponsive if you do this...
If this is true, then when I explicitly try to access the contents of
another process how would an OS ensure that it doesn't free up this
memory?
A normal process will not be able to access memory that belongs to another process - only through limited interfaces such as debug interfaces or specially written kernel drivers can you do this. Or by using OS-supported shared memory, where the same memory is mapped into two different processes with the permission of the OS, of course.
These methods of accessing the memory of another process involve some form of "reference counting" [in fact the same applies, for example, if the process is currently in a system call trying to save a 1000MB file and is killed for one reason or another - say another thread causes an unrecoverable fault] - the OS keeps track of how many "users" there are of a given piece of memory, so that it doesn't pull the rug out from under the feet of some process [or itself].
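As an aside, the shared-memory route mentioned above looks roughly like this on Windows (a sketch; OpenSharedBlock and the section name "Local\MySharedBlock" are made-up examples, and the mapping handle is deliberately left open while the view is in use):

    #include <windows.h>

    // Process A creates a named section; process B opens the same name.
    // Both then see the same physical pages through their own virtual addresses.
    void* OpenSharedBlock(bool creator, SIZE_T size)
    {
        HANDLE hMap = creator
            ? CreateFileMappingW(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                 0, (DWORD)size, L"Local\\MySharedBlock")
            : OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE, L"Local\\MySharedBlock");
        if (!hMap) return NULL;

        // Map a view of the section into this process's address space.
        return MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, size);
    }

One process passes creator = true, the other false; both then read and write the same physical pages through their own virtual addresses, with the OS keeping the section alive as long as either side has it mapped.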
I'm allocating memory using new as and when I receive data in one of my methods, and in the destructor I release all the allocated memory using delete.
However, after releasing the memory, when I look at the memory usage under the Processes tab of Task Manager, it still remains the same. It gives no indication that the memory is being released.
So, when does the memory actually get released? And what is the best way to find out the actual memory being used by a process.
Thanks
In most cases, it's never given back to the OS while the app is running. Afterwards, of course, all resources are recovered by the OS.
[Edited after the comments rightly pointed out that 'never' is a long time ...]
The OS allocates a default heap to your application. This heap is allocated during the initialization of your process. Thus, your new's and delete's won't affect the bar you see in the task manager.
However, if you try to initialize a big buffer and the allocated heap isn't enough, the OS will allocate more memory for your application - and this should be reflected in the task manager...
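If you want a number that is easier to interpret than the Task Manager column, you can ask the process itself (a sketch; PrintMemoryUsage is just an illustrative helper, and older toolchains need to link against psapi.lib):

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    void PrintMemoryUsage()
    {
        PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
            // WorkingSetSize: physical RAM currently in use (roughly what Task Manager shows).
            // PagefileUsage:  committed private bytes, closer to what new/delete affect.
            std::printf("working set: %zu KB, committed: %zu KB\n",
                        pmc.WorkingSetSize / 1024, pmc.PagefileUsage / 1024);
        }
    }

Calling this before and after your deletes shows the committed bytes shrinking even when the working-set figure in Task Manager barely moves.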
I'm currently working on an exception-based error reporting system for Windows MSVC++ (9.0) apps (i.e. exception structures & types / inheritance, call stack, error reporting & logging and so on).
My question now is: how to correctly report & log an out-of-memory error?
When this error occurs, e.g. as a bad_alloc thrown by the new operator, many "features" may be unavailable, mostly those that involve further memory allocation. Normally, I'd pass the exception up to the application if it was thrown in a lib, and then use message boxes and error log files to report and log it. Another way (mostly for services) is to use the Windows Event Log.
The main problem I have is to assemble an error message.
To provide some error information, I'd like to define a static error message (may be a string literal, better an entry in a message file, then using FormatMessage) and include some run-time info such as a call stack.
The functions / methods necessary for this use either
STL (std::string, std::stringstream, std::ofstream)
CRT (swprintf_s, fwrite)
or Win32 API (StackWalk64, MessageBox, FormatMessage, ReportEvent, WriteFile)
Besides being documented on MSDN, all of them are more (Win32) or less (STL) closed source on Windows, so I don't really know how they behave under low-memory conditions.
Just to prove there might be problems, I wrote a trivial small app provoking a bad_alloc:
int main()
{
    InitErrorReporter();

    try
    {
        for(int i = 0; i < 0xFFFFFFFF; i++)
        {
            for(int j = 0; j < 0xFFFFFFFF; j++)
            {
                char* p = new char;
            }
        }
    }
    catch(bad_alloc& e_b)
    {
        ReportError(e_b);
    }

    DeinitErrorReporter();

    return 0;
}
I ran two instances without a debugger attached (in the Release config, VS 2008), but "nothing happened", i.e. no error codes came back from the ReportEvent or WriteFile calls I use internally in the error reporting. Then I launched one instance with a debugger and one without, and let them try to report their errors one after the other by putting a breakpoint on the ReportError line. That worked fine for the instance with the debugger attached (it correctly reported and logged the error, even using LocalAlloc without problems)! But Task Manager showed strange behaviour: a lot of memory is freed before the app exits, I suppose when the exception is thrown.
Please consider there may be more than one process [edit] and more than one thread [/edit] consuming much memory, so freeing pre-allocated heap space is not a safe solution to avoid a low memory environment for the process which wants to report the error.
Thank you in advance!
"Freeing pre-allocated heap space...". This was exactly that I thought reading your question. But I think you can try it. Every process has its own virtual memory space. With another processes consuming a lot of memory, this still may work if the whole computer is working.
pre-allocate the buffer(s) you need
link statically and use _beginthreadex instead of CreateThread (otherwise, CRT functions may fail) -- OR -- implement the string concat / i2a yourself
Use MessageBox(MB_SYSTEMMODAL | MB_OK). MSDN mentions this for reporting OOM conditions (and an MS blogger described this behavior as intended: the message box will not allocate memory). A sketch follows after this list.
Logging is harder, at the very least, the log file needs to be open already.
Probably best with FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH, to avoid any buffering attempts. The first one requires that your writes and your memory buffers are sector-aligned (i.e. you need to query GetDiskFreeSpace, align your buffer to that, and write only to "multiple of sector size" file offsets, in blocks that are multiples of the sector size). I am not sure if this is necessary, or even helps, but a system-wide OOM where every allocation fails is hard to simulate.
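A minimal sketch of the MessageBox route (ReportOutOfMemory is an illustrative name; the message text lives in static storage so nothing in the failure path allocates):

    #include <windows.h>
    #include <new>

    // Static storage: nothing here allocates when we are already out of memory.
    static const wchar_t g_oomText[]  = L"The application has run out of memory.";
    static const wchar_t g_oomTitle[] = L"Fatal error";

    void ReportOutOfMemory()
    {
        // MB_SYSTEMMODAL is the variant MSDN mentions for out-of-memory reporting;
        // the intent is that the box can be shown without further allocations.
        MessageBoxW(NULL, g_oomText, g_oomTitle, MB_SYSTEMMODAL | MB_ICONERROR | MB_OK);
    }

    // Typical use:
    //   try { /* code whose allocations may throw */ }
    //   catch (const std::bad_alloc&) { ReportOutOfMemory(); }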
Please consider there may be more than one process consuming much memory, so freeing pre-allocated heap space is not a safe solution to avoid a low memory environment for the process which wants to report the error.
Under Windows (and other modern operating systems), each process has its own address space (aka memory) separate from every other running process. And all of that is separate from the literal RAM in the machine. The operating system has virtualized the process address space away from the physical RAM.
This is how Windows is able to push memory used by processes into the page file on the hard disk without those processes having any knowledge of what happened.
This is also how a single process can allocate more memory than the machine has physical RAM and yet still run. For instance, a program running on a machine with 512MB of RAM could still allocate 1GB of memory. Windows just couldn't keep all of it in RAM at the same time, so some of it would sit in the page file. But the program wouldn't know.
So consequently, if one process allocates memory, it does not cause another process to have less memory to work with. Each process is separate.
Each process only needs to worry about itself. And so the idea of freeing a pre-allocated chunk of memory is actually very viable.
You can't use CRT or MessageBox functions to handle OOM since they might need memory, as you describe. The only truly safe thing you can do is allocate a chunk of memory at startup that you can write information into, and open a handle to a file or a pipe, then WriteFile to it when you run out of memory.
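A sketch of that approach (the file name, buffer size, and function names are arbitrary; everything the failure path needs is prepared while memory is still plentiful):

    #include <windows.h>

    // Prepared at startup, while allocations still succeed.
    static HANDLE g_logFile = INVALID_HANDLE_VALUE;
    static char   g_oomMessage[256];

    void InitOomReporter()
    {
        g_logFile = CreateFileA("oom.log", GENERIC_WRITE, FILE_SHARE_READ, NULL,
                                CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        lstrcpynA(g_oomMessage, "out of memory\r\n", sizeof(g_oomMessage));
    }

    // Called from the bad_alloc handler: only WriteFile on an already-open
    // handle with an already-filled buffer, so no allocation is required.
    void ReportOom()
    {
        if (g_logFile != INVALID_HANDLE_VALUE) {
            DWORD written = 0;
            WriteFile(g_logFile, g_oomMessage, lstrlenA(g_oomMessage), &written, NULL);
            FlushFileBuffers(g_logFile);
        }
    }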