Use cases of mmap - c++

I'm currently studying for my OS finals. In some papers the teacher briefly mentions the mmap function (memory map).
As I understand it (correct me if I'm wrong), mmap is used to load some files from disk into RAM (after a page fault). The problem is that I don't see any practical reason for this other than to make access to that file faster.
Am I correct? Is mmap only used for this?

"mmap" has lots of purposes:
Mapping a file for faster read/write access is certainly one use
Shared memory (e.g. for interprocess communication) is another (see the sketch below)
mmap is also used to map I/O port addresses for low-level device communications
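For instance, the shared-memory case might look like this on a POSIX system. This is a minimal sketch; the segment name "/demo_shm" is made up:

    // Sketch: POSIX shared memory via shm_open(3) + mmap(2).
    // Link with -lrt on older glibc.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstring>
    #include <cstdio>

    int main() {
        const char* name = "/demo_shm";
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        ftruncate(fd, 4096);                       // size the segment
        char* p = static_cast<char*>(mmap(nullptr, 4096,
                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        std::strcpy(p, "hello");                   // visible to any other process
                                                   // that maps the same name
        munmap(p, 4096);
        close(fd);
        shm_unlink(name);                          // remove the segment
        return 0;
    }

A second process would shm_open the same name and mmap it with MAP_SHARED to see the same physical pages.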

mmap is used to load some files from disk into RAM (after a page fault)
Yes, to load the missing pages; modifications can also be written back to the disk the same way (sketched below)!
Performance: you don't have to load the whole file, which works really well if you have random access.
It can make your code considerably more compact; you don't have to worry about file I/O.
The OS can handle the memory management, deciding which pages to keep in memory and which to discard.
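As a small illustration of the write-back point, a sketch of my own, assuming an existing non-empty file named data.bin:

    // Sketch: writing through a MAP_SHARED mapping. The kernel writes the
    // dirty page back to the file; msync(2) forces it out immediately.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("data.bin", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }
        size_t len = 4096;
        char* p = static_cast<char*>(mmap(nullptr, len,
                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        p[0] = 'X';                      // an ordinary store, no write(2) call
        msync(p, len, MS_SYNC);          // flush the dirty page to the file
        munmap(p, len);
        close(fd);
        return 0;
    }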

In addition to #paulsm4's answer:
...
...
...
Most modern malloc(3) implementations use mmap(2) to manage private process memory (a sketch of the idea follows below).
The dynamic link-loader ld.so(8) uses it for mapping shared libraries.
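For illustration, this is roughly what an allocator does for a large request. A sketch of the idea, not glibc's actual code; glibc switches to mmap(2) above a tunable threshold (M_MMAP_THRESHOLD):

    // Sketch: how malloc(3) can obtain a large block with an anonymous mapping.
    #include <sys/mman.h>
    #include <cstdio>

    int main() {
        size_t len = 8 * 1024 * 1024;        // 8 MiB, well above typical thresholds
        void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        // ... the allocator would hand this region out to the caller ...
        munmap(p, len);                      // free(3) on such a chunk just unmaps it
        return 0;
    }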

mmap takes memory management out of the hands of the programmer to a large extent, and puts it in the hands of the OS.
It's about demand paging using the virtual memory subsystem from disk to physical memory.
So to look at the 11111th byte of a file, instead of seeking and reading, you can mmap it and use an array index. The OS will keep the surrounding data in its "buffer cache" (page cache, really).
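In code, that looks something like this (a minimal sketch; the file name some.file is a placeholder, and the file is assumed to be longer than 11111 bytes):

    // Sketch: reading byte 11111 with an array index instead of lseek(2)+read(2).
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("some.file", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        fstat(fd, &st);
        const char* p = static_cast<const char*>(mmap(nullptr, st.st_size,
                        PROT_READ, MAP_PRIVATE, fd, 0));
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        std::printf("%d\n", (unsigned char)p[11111]);  // demand-paged in by the kernel
        munmap(const_cast<char*>(p), st.st_size);
        close(fd);
        return 0;
    }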
Here's a real-world example:
http://stromberg.dnsalias.org/~strombrg/pbmonherc.html
The example's a little messy because it was written at a time when Linux had mmap support in its kernel, but the C library didn't yet have a stub for calling it. But you can pretty much ignore mmap.c. The example uses mmap to set pixels on and off using a monochromatic display adapter.
Another reasonable use is for a bloom filter:
http://stromberg.dnsalias.org/~strombrg/drs-bloom-filter/
...but on 32 bit OS's, the maximum size of an mmap'd memory region kinda hurts.

Related

C++ memory mapped files

POSIX environments provide at least two ways of accessing files. There's the standard system calls open(), read(), write(), and friends, but there's also the option of using mmap() to map the file into virtual memory.
When is it preferable to use one over the other? What're their individual advantages that merit including two interfaces?
mmap is great if you have multiple processes accessing data in a read only fashion from the same file, which is common in the kind of server systems I write. mmap allows all those processes to share the same physical memory pages, saving a lot of memory.
mmap also allows the operating system to optimize paging operations. For example, consider two programs: program A, which reads a 1MB file into a buffer created with malloc, and program B, which mmaps the 1MB file into memory. If the operating system has to swap part of A's memory out, it must write the contents of the buffer to swap before it can reuse the memory. In B's case any unmodified mmap'd pages can be reused immediately because the OS knows how to restore them from the existing file they were mmap'd from. (The OS can detect which pages are unmodified by initially marking writable mmap'd pages as read only and catching seg faults, similar to a copy-on-write strategy.)
mmap is also useful for inter process communication. You can mmap a file as read / write in the processes that need to communicate and then use synchronization primitives in the mmap'd region (this is what the MAP_HASSEMAPHORE flag is for).
One place mmap can be awkward is if you need to work with very large files on a 32 bit machine. This is because mmap has to find a contiguous block of addresses in your process's address space that is large enough to fit the entire range of the file being mapped. This can become a problem if your address space is fragmented, where you might have 2 GB of address space free but no individual range of it that can fit a 1 GB file mapping. In this case you may have to map the file in smaller chunks than you would like to make it fit.
Another potential awkwardness with mmap as a replacement for read / write is that you have to start your mapping on offsets that are multiples of the page size. If you just want to get some data at offset X, you will need to fix up that offset so it's compatible with mmap, as in the sketch below.
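A helper along these lines does the fix-up (my own sketch, not a standard API):

    // Sketch: mapping around an arbitrary offset x by rounding down to a
    // page boundary, as mmap(2) requires.
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    void* map_at(int fd, off_t x, size_t len) {
        long psz = sysconf(_SC_PAGESIZE);
        off_t aligned = x - (x % psz);             // round down to a page boundary
        size_t slack = static_cast<size_t>(x - aligned);
        char* p = static_cast<char*>(mmap(nullptr, len + slack,
                  PROT_READ, MAP_PRIVATE, fd, aligned));
        if (p == MAP_FAILED) return nullptr;
        return p + slack;                          // the caller's data starts here
    }

Note that to unmap, you must call munmap on the page-aligned base (the returned pointer minus slack) with length len + slack.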
And finally, read / write are the only way you can work with some types of files. mmap can't be used on things like pipes and ttys.
One area where I found mmap() not to be an advantage was when reading small files (under 16K). The overhead of page faulting to read the whole file was very high compared with just doing a single read() system call. This is because the kernel can sometimes satisfy a read entirely in your time slice, meaning your code doesn't switch away. With a page fault, it seemed more likely that another program would be scheduled, giving the file operation a higher latency.
mmap has the advantage when you have random access on big files. Another advantage is that you access it with memory operations (memcpy, pointer arithmetic) without bothering with buffering. Normal I/O can be quite difficult when using buffers and you have structures bigger than your buffer: the code to handle that is often hard to get right, while mmap is generally easier. That said, there are certain traps when working with mmap.
As people have already mentioned, mmap is quite costly to set up, so it is only worth using above a given size (varying from machine to machine).
For purely sequential access to the file, it is also not always the better solution, though an appropriate call to madvise can mitigate the problem (see the sketch after this list).
You have to be careful with the alignment restrictions of your architecture (SPARC, Itanium); with read/write I/O the buffers are often properly aligned and do not trap when dereferencing a cast pointer.
You also have to be careful not to access outside of the map. This can easily happen if you use string functions on your map and your file does not contain a \0 at the end. It will work most of the time, when your file size is not a multiple of the page size, because the last page is filled with 0 (the mapped area is always a multiple of your page size).
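To illustrate the madvise point above, a sketch of a sequential scan (my own illustration):

    // Sketch: a sequential scan over a mapping, hinting the access pattern
    // to the kernel to mitigate the sequential-access weakness.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    long count_newlines(const char* path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;
        struct stat st;
        fstat(fd, &st);
        if (st.st_size == 0) { close(fd); return 0; }
        char* p = static_cast<char*>(mmap(nullptr, st.st_size,
                  PROT_READ, MAP_PRIVATE, fd, 0));
        close(fd);                                  // the mapping keeps the file alive
        if (p == MAP_FAILED) return -1;
        madvise(p, st.st_size, MADV_SEQUENTIAL);    // hint: aggressive read-ahead
        long n = 0;
        for (off_t i = 0; i < st.st_size; ++i)
            if (p[i] == '\n') ++n;
        munmap(p, st.st_size);
        return n;
    }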
In addition to the other nice answers, a quote from Linux System Programming, written by Google's Robert Love:
Advantages of mmap()
Manipulating files via mmap() has a handful of advantages over the standard read() and write() system calls. Among them are:
Reading from and writing to a memory-mapped file avoids the extraneous copy that occurs when using the read() or write() system calls, where the data must be copied to and from a user-space buffer.
Aside from any potential page faults, reading from and writing to a memory-mapped file does not incur any system call or context switch overhead. It is as simple as accessing memory.
When multiple processes map the same object into memory, the data is shared among all the processes. Read-only and shared writable mappings are shared in their entirety; private writable mappings have their not-yet-COW (copy-on-write) pages shared.
Seeking around the mapping involves trivial pointer manipulations. There is no need for the lseek() system call.
For these reasons, mmap() is a smart choice for many applications.
Disadvantages of mmap()
There are a few points to keep in mind when using mmap():
Memory mappings are always an integer number of pages in size. Thus, the difference between the size of the backing file and an integer number of pages is "wasted" as slack space. For small files, a significant percentage of the mapping may be wasted. For example, with 4 KB pages, a 7 byte mapping wastes 4,089 bytes.
The memory mappings must fit into the process' address space. With a 32-bit address space, a very large number of various-sized mappings can result in fragmentation of the address space, making it hard to find large free contiguous regions. This problem, of course, is much less apparent with a 64-bit address space.
There is overhead in creating and maintaining the memory mappings and associated data structures inside the kernel. This overhead is generally obviated by the elimination of the double copy mentioned in the previous section, particularly for larger and frequently accessed files.
For these reasons, the benefits of mmap() are most greatly realized when the mapped file is large (and thus any wasted space is a small percentage of the total mapping), or when the total size of the mapped file is evenly divisible by the page size (and thus there is no wasted space).
Memory mapping has the potential for a huge speed advantage compared to traditional IO. It lets the operating system read the data from the source file as the pages in the memory-mapped file are touched. This works through faulting pages, which the OS detects; the OS then loads the corresponding data from the file automatically.
This works the same way as the paging mechanism and is usually optimized for high-speed I/O by reading data on system page boundaries and sizes (usually 4K), a size for which most file system caches are optimized.
An advantage that isn't listed yet is the ability of mmap() to keep a read-only mapping as clean pages. If one allocates a buffer in the process's address space, then uses read() to fill the buffer from a file, the memory pages corresponding to that buffer are now dirty since they have been written to.
Dirty pages cannot be dropped from RAM by the kernel. If there is swap space, they can be paged out to swap, but this is costly, and on some systems, such as small embedded devices with only flash memory, there is no swap at all. In that case the buffer will be stuck in RAM until the process exits, or perhaps gives it back with madvise().
Pages of an mmap() mapping that have not been written to are clean. If the kernel needs RAM, it can simply drop them and reuse the RAM the pages were in. If the process that had the mapping accesses it again, it causes a page fault and the kernel reloads the pages from the file they came from originally, the same way they were populated in the first place.
This doesn't require more than one process using the mapped file to be an advantage.

Do memory mapped files provide an advantage for large buffers?

My program works with large data sets that need to be stored in contiguous memory (several gigabytes). Allocating memory using std::allocator (i.e. malloc or new) causes system stalls as large portions of virtual memory are reserved and physical memory gets filled up.
Since the program will mostly only work on small portions at a time, my question is whether using memory-mapped files would provide an advantage (i.e. mmap or the Windows equivalent). That is, creating a large sparse temporary file and mapping it into virtual memory. Or is there another technique that would change the system's paging strategy so that fewer pages are loaded into physical memory at a time?
I'm trying to avoid building a streaming mechanism that loads portions of a file at a time, and instead rely on the system's VM paging.
Yes, mmap has the potential to speed things up.
Things to consider:
Remember the VMM will page things in and out in page-size blocks (4 KB on Linux).
If your memory access is well localised over time, this will work well. But if you do random access over your entire file, you will end up with a lot of seeking and thrashing (still). So, consider whether your 'small portions' correspond with localised bits of the file.
For large allocations, malloc and free will use mmap with MAP_ANON anyway. So the difference in memory mapping a file is simply that you are getting the VMM to do the I/O for you.
Consider using madvise with mmap to assist the VMM in paging well (as used in the sketch below).
When you use open and read (plus, as erenon suggests, posix_fadvise), your file is still held in buffers anyway (i.e. it's not immediately written out) unless you also use O_DIRECT. So in both situations, you are relying on the kernel for I/O scheduling.
If the data is already in a file, this would speed things up, especially in the non-sequential case. (In the sequential case, read wins.)
If using open and read, consider using posix_fadvise as well.
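Putting those points together, here is a sketch of the sparse-temporary-file idea from the question. My own illustration, assuming a 64-bit Linux build; the path template is arbitrary:

    // Sketch: back a multi-GB buffer with a sparse temporary file so that
    // only touched pages ever occupy RAM or disk.
    #include <cstdio>
    #include <cstdlib>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        size_t len = 4ull * 1024 * 1024 * 1024;    // 4 GiB of address space
        char path[] = "/tmp/bigbufXXXXXX";
        int fd = mkstemp(path);                    // temp file, initially empty
        if (fd < 0) { perror("mkstemp"); return 1; }
        unlink(path);                              // vanish when closed
        ftruncate(fd, len);                        // sparse: no blocks allocated yet
        char* buf = static_cast<char*>(mmap(nullptr, len,
                    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }
        madvise(buf, len, MADV_RANDOM);            // we expect scattered access
        buf[123456789] = 42;                       // pages materialize only on touch
        munmap(buf, len);
        close(fd);
        return 0;
    }

If the data never needs a file at all, MAP_ANONYMOUS gives the same lazy commit without the temporary file.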
This really depends on your mmap() implementation. Mapping a file into memory has several advantages that can be exploited by the kernel:
The kernel knows that the contents of the mmap() pages are already present on disk. If it decides to evict these pages, it can omit the write-back.
You reduce copying operations: read() operations typically first read the data into kernel memory, then copy it over to user space.
The reduced copies also mean that less memory is used to store data from the file, which means more memory is available for other uses, which can reduce paging as well.
This is also, why it is generally a bad idea to use large caches within an I/O library: Modern kernels already cache everything they ever read from disk, caching a copy in user space means that the amount of data that can be cached is actually reduced.
Of course, you also avoid a lot of headaches that result from buffering data of unknown size in your application. But that is just a convenience for you as a programmer.
However, even though the kernel can exploit these properties, it does not necessarily do so. My experience is that Linux mmap() is generally fine; on AIX, however, I have witnessed really bad mmap() performance. So, if your goal is performance, it's the old measure-compare-decide standby.

Write Only Memory Mapping in boost?

Why doesn't boost interprocess support write only memory mapping?
Maybe I'm missing something, but wouldn't a write-only mapping be significantly faster than a read/write mapping, as the OS doesn't have to read the pages in from disk, just flush pages out from memory to disk? It would also have the benefit of being entirely non-blocking (except for flushing and destruction).
Would I benefit by switching from boost to native OS memory mapping?
In fact if you allocate a new memory-mapped file of size, say, 20Gb, you'll get a sparse file allocation of that size.
When "mapping in" pages of that files, there need to be a read operation (as the OS might be able to tell that the page is not physically present yet on disk), and only when (if) those pages are dirtied need they be written out.
Of course, this is implementation dependent and I don't think POSIX (can) guarantee this, but it's not unreasonable behaviour IYAM, and would be the equivalent of write-only mapping.
Actually, a write-only memmap would not be faster, as the OS can only keep track of changes / provide those mappings at whole-page granularity.
At least, not without the prohibitive cost of simulating all accesses to such pages in kernel-land (which is not implemented) instead of just mapping a page.
Somehow, I doubt that going directly to the OS API instead of through the Boost API could provide any significant speed-ups:
The boost API is a thin wrapper over the OS-specific interface and will be completely inlined and thus compiled out by any decent compiler.
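For reference, the Boost version is only a few lines anyway. A sketch; data.bin is a placeholder and must already exist with nonzero size:

    // Sketch: mapping an existing file read/write with Boost.Interprocess.
    #include <boost/interprocess/file_mapping.hpp>
    #include <boost/interprocess/mapped_region.hpp>

    int main() {
        namespace bip = boost::interprocess;
        bip::file_mapping file("data.bin", bip::read_write);
        bip::mapped_region region(file, bip::read_write);  // maps the whole file
        char* p = static_cast<char*>(region.get_address());
        p[0] = 'X';                    // write through the mapping
        region.flush();                // ask the OS to write dirty pages back
        return 0;                      // destructor unmaps
    }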

Is memory-mapped memory possible?

I know that it is possible to use memory-mapped files, i.e. real files on disk that are transparently mapped to memory. As far as I understand (I haven't used these yet), the mapping takes place immediately; the file is partly read on the first memory access, while the OS starts "caching" the whole file in the background.
Now: Is it possible to somewhat abuse this concept and memory-map another block of memory? Assuming the OS provides such indirection, one could create a kind of compressed_malloc() that returns a mapping from memory to memory. The memory returned to the caller is simply the memory-mapped range that is transparently compressed in memory and eventually also kept in memory. Thus, for large buffers it could be possible that only parts of them get decompressed on the fly (on access) while the remaining blocks are kept compressed.
Is that concept technically possible at the moment or - if already realized (in software) - what are the things to look at?
Update 1: I am more or less looking for something that is technically achievable without modifying the OS kernel itself or which requires a virtualization platform.
Update 2: I am hoping for something which allows me to implement the compression and related logic in my own user-space code. I would just use the facilities of the operating system to create the memory-mapping.
Very much so. The VM (Virtual Memory) system is designed to handle different kinds of objects that can be mapped. There is in fact a filesystem called cramfs that does something similar, in the sense that it keeps compressed data in storage but enables transparent, uncompressed access.
You would not be modifying the kernel per se, but you will have to work in the kernel space, implementing VM handlers for this new kind of a memory mapped object.
This is possible, e.g.:
http://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.resourcemanagement.doc_41/managing_memory_resources/c_memory_compression.html
It is not currently implemented in kernel space in Linux, but something like this could be implemented in user space.
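A rough user-space sketch of the idea follows. This is my own illustration, not production code: mprotect and memset are not guaranteed to be async-signal-safe (Linux's userfaultfd(2) is the more robust mechanism), and the "decompression" step is stubbed out with a memset:

    // Sketch: reserve pages with PROT_NONE, then materialize each page on
    // first touch from a SIGSEGV handler; a compressed_malloc() would
    // decompress into the page where the memset is.
    #include <signal.h>
    #include <string.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char*  g_base;
    static size_t g_size;

    static void on_fault(int, siginfo_t* info, void*) {
        char* addr = static_cast<char*>(info->si_addr);
        if (addr < g_base || addr >= g_base + g_size)
            _exit(1);                          // a real crash, not one of our pages
        long psz = sysconf(_SC_PAGESIZE);
        char* page = g_base + ((addr - g_base) / psz) * psz;
        mprotect(page, psz, PROT_READ | PROT_WRITE);
        memset(page, 0xAB, psz);               // "decompress" into the page here
    }

    int main() {
        g_size = 1 << 20;
        g_base = static_cast<char*>(mmap(nullptr, g_size, PROT_NONE,
                                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
        struct sigaction sa = {};
        sa.sa_flags = SA_SIGINFO;
        sa.sa_sigaction = on_fault;
        sigaction(SIGSEGV, &sa, nullptr);
        printf("%d\n", (unsigned char)g_base[12345]);  // faults, then prints 171
        return 0;
    }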

Reserving Shared Memory with No File Backing (Linux/Windows) (boost::interprocess)

How can I reserve and allocate shared memory without the backing of a file? I'm trying to reserve a large (many tens of GiBs) chunk of shared memory and use it in multiple processes as a form of IPC. However, most of this chunk won't be touched at all (the access will be really sparse; maybe a few hundred megabytes throughout the life of the processes) and I don't care about the data when the applications end.
So preferably, the method to do this should have the following properties:
Doesn't commit the whole range. I will choose which parts to commit (actually use.) (But the pattern is quite unpredictable.)
Doesn't need a memory-mapped file or anything like that. I don't need to preserve the data.
Lets me access the memory area from multiple processes (I'll handle the locking explicitly.)
Works in both Linux and Windows (obviously a 64-bit OS is needed.)
Actually uses shared memory. I need the performance.
(NEW) The OS or the library doesn't try to initialize the reserved region (to zero or whatever.) This is obviously impractical and unnecessary.
I've been experimenting with boost::interprocess::shared_memory_object, but that causes a large file to be created on the filesystem (with the same size as my mapped memory region.) It does remove the file afterwards, but that hardly helps.
Any help/advice/pointers/reference is appreciated.
P.S. I do know how to do this on Windows using the native API. And POSIX seems to have the same functionality (only with a cleaner interface!) I'm looking for a cross-platform way here.
UPDATE: I did a bit of digging, and it turns out that the support I thought existed in Windows was only a special case of memory-mapped files, using the system page file as backing. (I'd never noticed it before because I had only used a few megabytes of shared memory in past projects.)
Also, I have a new requirement now (the number 6 above.)
On Windows, all memory has to be backed by the disk one way or another.
The closest I think you can come on Windows is to memory-map a sparse file (sketched at the end of this answer). I think this will work in your case, for two reasons:
On Windows, only the shared memory that you actually touch will become resident. This meets the first requirement.
Because the file is sparse, only the parts that have been modified will actually be stored on disk. If you looked at the properties of the file, it would say something like, "Size: 500 MB, Size on disk: 32 KB".
I realize that this technically doesn't meet your 2nd requirement, but, unfortunately, I don't think that is possible on Windows. At least with this approach, only the regions you actually use will take up space. (And only when Windows decides to commit the memory to disk, which it may or may not do at its discretion.)
As for turning this into a cross-platform solution, one option would be to modify boost::interprocess so that it creates sparse files on Windows. I believe boost::interprocess already meets your requirements on Linux, where POSIX shared memory is available.
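For reference, marking the backing file sparse on Windows looks roughly like this. A sketch with error handling omitted; shm.bin is a placeholder name and a 64-bit process is assumed:

    // Sketch: create a sparse file, then map a huge view of it; only the
    // regions actually written consume disk blocks.
    #include <windows.h>
    #include <winioctl.h>

    int main() {
        HANDLE file = CreateFileA("shm.bin",
            GENERIC_READ | GENERIC_WRITE,
            FILE_SHARE_READ | FILE_SHARE_WRITE,
            nullptr, CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY, nullptr);
        DWORD bytes = 0;                        // mark the file sparse
        DeviceIoControl(file, FSCTL_SET_SPARSE, nullptr, 0,
                        nullptr, 0, &bytes, nullptr);
        LARGE_INTEGER size;
        size.QuadPart = 1ll << 36;              // 64 GiB, no disk space used yet
        HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READWRITE,
                         (DWORD)size.HighPart, size.LowPart, nullptr);
        char* p = static_cast<char*>(
            MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0));
        p[123456789] = 42;                      // only touched pages materialize
        UnmapViewOfFile(p);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }

Other processes could open the same named file (or a named mapping) to share the region.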