Do I have to munmap() a mmap() file? - c++

I am relatively new to C++ and I am learning from another guy's code.
His code reads from a mmapped file, but does not free any mapped memory in the end. In my understanding, mmap() maps files into virtual memory. Don't I need to release that mapped memory in some way, e.g. by calling munmap()?

I believe you should release mapped memory with munmap(). It will be released automatically after exit() (just as regular files and sockets are closed automatically), but remember that relying on implicit closing/unmapping is bad style!

When you are done, just use munmap(). If your program is exiting, there is no need: the segment(s) are unmapped automatically at exit.

munmap happens automatically on exit
So if the program is going to exit anyways, you don't really need to do it.
man munmap (from man-pages 4.15) says:
The munmap() system call deletes the mappings for the specified address range, and causes further references to addresses within the range to generate invalid memory references. The region is also automatically unmapped when the process is terminated. On the other hand, closing the file descriptor does not unmap the region.
If the program doesn't exit, of course, you leak memory, just as with malloc (which nowadays uses mmap).
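For illustration, a minimal sketch (POSIX; the file name data.bin is made up) of mapping a file read-only, using it, and unmapping explicitly:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("data.bin", O_RDONLY);       // hypothetical input file
    if (fd == -1) { std::perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) == -1) { std::perror("fstat"); close(fd); return 1; }

    void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                                 // closing the fd does NOT unmap
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

    std::printf("first byte: %d\n", static_cast<unsigned char*>(p)[0]);

    munmap(p, st.st_size);                     // explicit cleanup; optional right before exit
    return 0;
}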

Related

Clear strings from process memory

To improve the security of my application, I am trying to delete string data from the process memory, but since there is little information about this on the Internet, I could not write working code.
Can anyone help me?
My pasted code:
#include <windows.h>
#include <iostream>
#include <cstring>
using namespace std;

void MemoryStringsClear() {
    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, GetCurrentProcessId());
    MEMORY_BASIC_INFORMATION mbi;
    char* addr = 0;
    while (VirtualQueryEx(hProc, addr, &mbi, sizeof(mbi)))
    {
        if (mbi.State != MEM_COMMIT || mbi.Protect == PAGE_NOACCESS)
        {
            //char* buffer = new char[mbi.RegionSize];
            //ReadProcessMemory(hProc, addr, buffer, mbi.RegionSize, nullptr);
            if (addr) {
                cout << "Addr: " << &addr << " is cleared!" << endl;
                memset(addr, '0', mbi.RegionSize);
            }
        }
        addr += mbi.RegionSize;
    }
    CloseHandle(hProc);
}
EDITED:
I chose this way of solving the problem because my application consists of many modules (.exe applications), some of which I cannot change.
There are some problems with your approach (my idea for a solution is further down):
Most of the strings listed are environment variables
All of the programs that run on your computer have access to those. They are copied to the memory space of every program on startup so every program knows where to look for certain files. There is no point in removing them from the memory of your application, since every application running on your computer already knows them.
You can see them by running cmd.exe, typing set and then pressing return.
OpenProcess and VirtualQueryEx are for accessing another process
You could simply use VirtualQuery, since you only want to access your own process.
I guess you are trying to get access to non-committed memory pages by doing this, but memset can only access committed, writable memory pages in your own program's address space. So those two approaches don't mix.
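As a sketch of that simplification (the same loop shape as the question's code, but walking only the current process, no handle needed):

#include <windows.h>
#include <cstdio>

int main() {
    MEMORY_BASIC_INFORMATION mbi;
    char* addr = nullptr;
    // VirtualQuery (no handle, no Ex suffix) inspects the calling process itself
    while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_COMMIT && mbi.Protect == PAGE_READWRITE)
            std::printf("committed, writable region at %p, size %zu\n",
                        mbi.BaseAddress, static_cast<size_t>(mbi.RegionSize));
        addr += mbi.RegionSize;
    }
    return 0;
}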
But there is a more important point to this:
Non-committed memory does not exist
If a memory page is not committed, there is no actual memory assigned to that address. That means, that there is nothing you can overwrite with zeroes. The memory containing your strings may already have been assigned to another application. Read some information about virtual memory management for details.
Most calls to free, delete or garbage collection do not always actually decommit the page
For efficiency reasons, when your code allocates and deallocates memory, your runtime library hands out little scraps of a larger block of memory (called the "heap"), which is only decommitted once every single piece in it has been freed.
You could find freed blocks of memory by walking over the heap entries, but how that works depends on your C runtime library or other runtime libraries.
The operating system might move your strings around
If the operating system detects a shortage of memory, it can save your strings to disk to free up memory for other applications, and reload them when your application becomes active again. It usually does not bother to clean up the disk afterwards. You have no influence on that (unless you format your hard drive).
My ideas for a solution
Before every call to free or delete in your code that frees memory with sensitive information (and only those), you can call memset(...) on that single block of memory. In C++, you can wrap that up in a class which clears its memory on destruction, as Alan Birtles pointed out in his comment; a sketch follows below.
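A minimal sketch of such a wrapper, assuming Windows (the class name SecureBuffer is made up; SecureZeroMemory is used instead of plain memset because a memset right before deallocation may be optimized away by the compiler):

#include <windows.h>
#include <cstddef>
#include <vector>

// Hypothetical wrapper: holds sensitive bytes, wipes them on destruction.
class SecureBuffer {
public:
    explicit SecureBuffer(std::size_t n) : data_(n) {}
    ~SecureBuffer() { SecureZeroMemory(data_.data(), data_.size()); }

    SecureBuffer(const SecureBuffer&) = delete;            // no stray copies
    SecureBuffer& operator=(const SecureBuffer&) = delete;

    char* data() { return data_.data(); }
    std::size_t size() const { return data_.size(); }

private:
    std::vector<char> data_;
};

Anything sensitive is kept inside the buffer for its whole lifetime and wiped the moment it goes out of scope, so no individual memset call can be forgotten.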
I don't think there is a solution that you can simply pop onto an existing program that clears sensitive information after the memory has been freed.
This approach leaves only the last problem. You can only circumvent that one by never storing your sensitive information unencrypted in memory, which is probably not feasible, since at some point you will have to handle the data in unencrypted form.
What will be difficult or impossible
If you want to clear freed memory in other processes (the separate *.exe files from your edit that you cannot change), you have to understand their internal heap layout and use WriteProcessMemory instead of memset.
But this does not catch the case where the other program actually decommits a page, since you do not know if the operating system has already reassigned it. When this happens is completely outside of your control.
You might also try to reimplement the free and delete functions in your C runtime library so they first clear the memory and then call the original version, but this only works if they are actually used by those *.exe files and they are dynamically linked. If these conditions are met, you might still have a hard time.
Define the security threats you want to protect against
To improve the security of my application,
What exactly are you trying to guard against? Have you verified that clearing process memory will actually work against the security attacks that you want to defend against?
Know how memory works
Find out how your operating system allocates both virtual and physical memory; otherwise, wrong assumptions about how it works might lead you to implement ineffective solutions. Most computer systems use virtual memory, which means some of your memory might actually end up being copied to different places in physical RAM or to disk. On the other hand, if your process exits and a new process starts, most operating systems will clear the RAM used by the first process before assigning it to the second.
Ensure you have full control over the memory you want to clear
As Iziminza already mentioned, your process has virtual memory, but the operating system can choose how to back that virtual memory with physical memory. When it needs RAM for some other process, it can decide to move your data to a swap file on disk until it is needed again. In order to make clearing of memory using memset() meaningful, you must ensure there are no copies stored elsewhere. You can do this by using VirtualLock() on Windows, or mlock() on other operating systems. Even then, if the computer is going into hibernation mode, even locked memory is written to disk.
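A sketch of that pattern on Windows (allocate, lock, use, wipe, unlock, release); mlock()/munlock() plus a non-optimizable wipe such as explicit_bzero() would be the rough equivalent on glibc systems:

#include <windows.h>

int main() {
    const SIZE_T size = 4096;
    void* secret = VirtualAlloc(nullptr, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!secret) return 1;

    VirtualLock(secret, size);         // keep these pages out of the pagefile
    // ... place and use the sensitive data in 'secret' ...
    SecureZeroMemory(secret, size);    // wipe before releasing
    VirtualUnlock(secret, size);
    VirtualFree(secret, 0, MEM_RELEASE);
    return 0;
}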

Why does malloc() or new never return NULL? [duplicate]

This question already has answers here: SIGKILL while allocating memory in C++ (2 answers). Closed 9 years ago.
I'm writing an application which needs a lot of memory for caching purposes, as I described here. Now I'm playing around with some malloc / new constructions to figure out how I could realise it. I made a strange observation:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    while (1) {
        char *foo = (char*)malloc(1024); // new char[1024];
        if (foo == NULL) {
            printf("Couldn't alloc\n");
            fflush(stdout);
            return 0;
        }
    }
    return 0;
}
Why is that printf never reached? If my system runs out of memory, malloc is said to return NULL, as explained here. But I always receive SIGKILL (I'm using Linux...).
Linux, by default, uses an opportunistic memory allocation scheme: the kernel gives you a valid address whose backing memory is not actually allocated until first use.
See:
SIGKILL while allocating memory
C Program on Linux to exhaust memory
According to those responses, you can turn this feature off using echo 2 > /proc/sys/vm/overcommit_memory.
From what I can tell, this is done under the assumption that you won't necessarily use all the memory you allocate. I can't say that I personally ever allocate space that I don't touch at least once, so I'd be curious to know how this affects real-life performance...
Regarding the SIGKILL failure, every malloc you call still allocates some bookkeeping memory for each call. Eventually you will likely fill your memory with malloc overhead and thus invoke the fury of the out-of-memory killer. Whether this alone is the issue, or whether the overcommit policy still commits some fraction of the requested space, is a good question.
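To see the overcommit effect yourself, here is a small sketch: touching every page forces the kernel to back the allocation with physical memory, so the OOM killer strikes far sooner than with untouched blocks:

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    std::size_t mib = 0;
    while (true) {
        char* p = static_cast<char*>(std::malloc(1 << 20));   // 1 MiB
        if (!p) {                         // only reachable with overcommit disabled
            std::printf("malloc failed after %zu MiB\n", mib);
            return 0;
        }
        std::memset(p, 0xAB, 1 << 20);    // touch every page: forces physical backing
        ++mib;
    }
}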
Usually, Linux will allocate as much (virtual) memory as you request, and only allocate physical memory for it when it's needed. If the system runs out of physical memory, then it starts killing processes to free some.
This means that malloc will succeed unless the request is ludicrous, but your process (or some other) is likely to get killed as the memory is used.
For more details, see the manpage for malloc, and its references:
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. In case it turns out that the system is out of memory, one or more processes will be killed by the OOM killer. For more information, see the description of /proc/sys/vm/overcommit_memory and /proc/sys/vm/oom_adj in proc(5), and the kernel source file Documentation/vm/overcommit-accounting.
(And of course new doesn't return null on failure anyway: it throws std::bad_alloc, unless you use the nothrow version.)
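For reference, a quick sketch of both forms of new:

#include <new>
#include <cstdio>

int main() {
    try {
        char* a = new char[1024];            // throws std::bad_alloc on failure
        delete[] a;
    } catch (const std::bad_alloc&) {
        std::puts("plain new threw");
    }

    char* b = new (std::nothrow) char[1024]; // returns nullptr on failure instead
    if (!b) std::puts("nothrow new returned null");
    delete[] b;
    return 0;
}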
malloc returns NULL if the requested allocation cannot be fulfilled. But maybe you should try allocating tons of space from the heap in a single call.
See here.
On Linux, the only way to get an error when calling malloc() is to disable memory overcommitting. On regular Linux systems, this is the only way for malloc() to return NULL. If an icecast process reaches that point, it's screwed anyway: it won't be able to do anything meaningful. Any source reads will fail (refbuf alloc), any log print will also fail (printf uses malloc too), so it might as well give up and call abort().
malloc would return NULL if the operating system let your program run that long. But before malloc gets a chance to fail, the operating system kills your process, just as it kills your process when it detects that you are writing outside the memory pages allocated to it.

Heap memory management of child process upon forkpty() and execl()?

I have a C++ app I'm developing on Linux. I'm allocating some dynamic memory and ultimately calling forkpty(). The child process is calling execl() and, as we know, execl() never returns if it succeeds in executing the command. Furthermore, as we know, forkpty() makes a copy of all the parent's data. So, if the child process never returns control back to my application in order to ultimately do memory cleanup, is it safe to say one had better not have any dynamic memory allocated at the time execl() is called from the child process??? I can't believe I could not find this one on here... Thanks in advance.
Allocated memory is part of the process image; when you call execl, the entire process image is replaced, and any memory in it simply "disappears" like the rest of it, returning to the OS, which will then use it elsewhere.
All of the "forked" process memory is freed as part of execl() (if the call is successful).
If this wasn't the case, there would be a lot of memory leaks all over a regular linux system, as it's almost impossible to write anything even a little complex without allocating memory, and, for example, if the arguments to execl() are allocated, you couldn't possibly free them before calling execl().
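A minimal sketch of the situation (Linux; link with -lutil for forkpty), just to illustrate that the child's copy of the heap vanishes with the old image:

#include <pty.h>        // forkpty; link with -lutil
#include <unistd.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    char* scratch = static_cast<char*>(std::malloc(1024));   // parent heap allocation
    std::strcpy(scratch, "some state");

    int master;
    pid_t pid = forkpty(&master, nullptr, nullptr, nullptr);
    if (pid == -1) { std::perror("forkpty"); return 1; }

    if (pid == 0) {
        // Child: its copy of 'scratch' is discarded along with the rest of the
        // process image when execl succeeds. There is nothing to free here.
        execl("/bin/ls", "ls", "-l", static_cast<char*>(nullptr));
        std::perror("execl");            // reached only if execl failed
        _exit(1);
    }

    // Parent: still owns its own copy and frees it normally.
    std::free(scratch);
    return 0;
}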

How is dynamically allocated space freed when a program is interrupted using Ctrl-C?

Given the following code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *p;
    p = (int *)malloc(10 * sizeof(int));
    while (1);
    return 0;
}
When the above code is compiled and run, and is interrupted during execution by pressing Ctrl+C, how is the memory allocated to p freed? What is the role of the operating system here? And how is it different in the case of C++, where the allocation is done with the new operator?
When a process terminates, the operating system reclaims all the memory that the process was using.
The reason why people make a big deal out of memory leaks even when the OS reclaims the memory your app was using when it terminates is that usually non-trivial applications will run for a long time slowly gobbling up all the memory on the system. It's less of a problem for very short-lifetime programs. (But you can never tell when a one-liner will become a huge program, so don't have any memory leaks even in small programs.)
By the way (in addition to what Seth Carnegie said):
Using the routines in <signal.h> you can catch the SIGINT signal (interrupt) and handle Ctrl+C any way you like, for example to clean up important resources beyond memory: closing files (avoiding the loss of buffered, not-yet-written content) or shutting down network connections gracefully.
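A minimal sketch of that idea, using sigaction: the handler only sets a flag (handlers should stick to async-signal-safe operations) and the main loop does the actual cleanup:

#include <signal.h>
#include <stdlib.h>
#include <stdio.h>

static volatile sig_atomic_t interrupted = 0;

static void on_sigint(int) { interrupted = 1; }   // only sets a flag: async-signal-safe

int main() {
    struct sigaction sa = {};
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, nullptr);

    int* p = static_cast<int*>(malloc(10 * sizeof(int)));
    while (!interrupted) { /* do work */ }

    free(p);                      // explicit cleanup when Ctrl+C arrives
    puts("cleaned up, exiting");
    return 0;
}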
The full explanation of _exit is here:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/_Exit.html
The same things happen when a process terminates as a result of a fatal signal.
The memory is actually not "free()"d at all.
Memory is acquired from the operating system in page-sized chunks (usually 4 KB). Whenever a process runs out of memory it acquires additional pages; these are the space malloc() actually manages. When a process terminates, all pages are returned to the operating system, making a final call to free() unnecessary. If your program is a server or similar, every piece of memory that is never freed will only be returned when the program is actually killed, making it ever more memory hungry.

Quick successful exit from C++ with lots of objects allocated

I'm looking for a way to quickly exit a C++ program that has allocated a lot of structures in memory using C++ classes. The program finishes correctly, but after the final "return" in the program, all of the auto-destructors kick in. The problem is the program has allocated about 15GB of memory through lots of C++ class structures, and this auto-destruct process takes about one more hour by itself as it walks through all of the structures, even though I don't care about the results. The program only took one hour to complete the task up to this point. I would like to just return to the OS and let it do its normal wholesale process deallocation, which is very quick. I've been doing this by manually killing the process during the cleanup stage, but am looking for a better programmatic solution.
I would like to return a success to the OS, but don't care to keep any of the memory content. The program does perform a lot of dynamic allocation/deallocation during the normal processing, so it's not just simple heap management.
Any opinions?
In Standard C++ you only have abort(), but that has the process return failure to the OS.
On many platforms (Unix, MS Windows) you can use _exit() to exit the program without running cleanup and destructors.
C++0x std::quick_exit is what you are looking for if your compiler already supports it (g++-4.4.5 does).
If the 15 GB of memory is being allocated to a reasonably small number of classes, you could override operator delete for those classes. Just pass the call to the standard delete, but set up a global flag that, if set, will make the call to delete a no-op. Or, if the logic of your program is such that these objects are not deleted in the normal course of building your data structures, you could simply ignore delete in all cases for these classes.
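A sketch of that idea for one class (the flag name g_skip_delete is made up):

#include <cstddef>
#include <cstdlib>

// Hypothetical global switch, set to true just before the final return.
static bool g_skip_delete = false;

struct BigNode {
    char payload[256];

    static void* operator new(std::size_t n) { return std::malloc(n); }
    static void operator delete(void* p) {
        if (g_skip_delete) return;     // shutdown: let the OS reclaim everything
        std::free(p);
    }
};

Setting g_skip_delete = true right before the final return turns the per-object frees into no-ops, and the OS then reclaims the whole address space at once.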
As Naveen says, this can't be a matter of memory deallocation. I've written neural network simulations with evolutionary algorithms that allocated and freed lots of memory in small and large chunks, and this was never a major issue.
If you have a C99 compiler, you can use the _Exit function to end immediately without having global object destructors or any functions registered with atexit called; whether or not unwritten buffered file data is flushed, open streams are closed, or temporary files are removed is implementation-defined (C99 §7.20.4.4).
If you're on Windows, you can also use ExitProcess to achieve the same effect.
But, as others have said, your destructors should really not be taking an hour to run unless you're doing a fair amount of I/O (writing log files, etc.). I strongly, strongly recommend you profile your program to see where the time is spent.
The possible strategies depend on the number of objects that are directly visible in main through which you access the 15GB of data and if these are local to main or statically allocated.
If all access to the 15GB of data is through local objects in main, then you can simply replace the return 0; at the end of main with exit(0);.
exit will terminate your application and trigger cleanup of statically allocated variables, but not of local variables.
If the data is accessed through a handful of statically allocated variables, you could turn them into pointers (or references) to dynamically allocated memory and deliberately leak that.
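For completeness, a sketch of the std::quick_exit route (C++11): static destructors are skipped, but handlers registered with std::at_quick_exit still run, so essential cleanup such as flushing logs can be kept:

#include <cstdlib>
#include <cstdio>

static void flush_logs() { std::puts("flushing logs"); }   // hypothetical cleanup handler

int main() {
    std::at_quick_exit(flush_logs);

    // ... build the 15 GB of structures and do the real work ...

    std::quick_exit(0);    // report success; no destructor walk on the way out
}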