Interprocess communication heap memory - c++

Can heap memory be shared between two different processes? The Boost.Interprocess documentation states that managed heap memory does not create system-wide resources.

The C++ standard makes no provision for shared memory between processes. Each process runs in its own address space and manages its dynamic memory within that address space, so there is no way to share the "heap".
However, operating systems give you means to share memory between processes. The best-known way is the use of memory-mapped files, which are supported across a wide range of operating systems, but in an OS-specific and therefore non-portable way. Boost provides a portable implementation which hides the OS-specific parts.
You may very well use the memory area obtained in this way for your objects. You could use placement new to instantiate objects. You could even create a custom allocator to create dynamic objects in this memory area.
However, this requires extra care, since you have to take IPC synchronisation into consideration to avoid races, and you need to remember that any pointer created by one process is garbage for the other process (since it runs in another, independent address space).
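As a minimal sketch of what this looks like with Boost.Interprocess (the segment name "MySharedMemory" and the sizes are placeholders, not anything prescribed by the library):

    #include <boost/interprocess/managed_shared_memory.hpp>

    namespace bip = boost::interprocess;

    int main() {
        // Remove any leftover segment from a previous run.
        bip::shared_memory_object::remove("MySharedMemory");

        // Create a named, system-wide shared memory segment of 64 KiB.
        bip::managed_shared_memory segment(bip::create_only,
                                           "MySharedMemory", 65536);

        // Construct an object inside the segment. Other processes look it
        // up by name instead of by pointer value, since raw addresses are
        // only meaningful inside the process that produced them.
        int* counter = segment.construct<int>("Counter")(0);
        *counter = 42;

        // A second process would attach with bip::open_only and call
        // segment.find<int>("Counter") to get its own local address.
        return 0;
    }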

Related

What decides where on the heap memory is allocated?

Let me clarify: I understand how new and delete (and delete[]) work. I understand what the stack is, and I understand when to allocate memory on the stack versus on the heap.
What I don't understand, however, is: where on the heap is memory allocated. I know we're supposed to look at the heap as this big pool of pretty much limitless RAM, but surely that's not the case.
What is in control of choosing where on the heap memory is stored and how does it choose that?
Also: the term "returning memory to the OS" is one I come across quite often. Does this mean that the heap is shared between all processes?
The reason I care about all this is because I want to learn more about memory fragmentation. I figured it'd be a good idea to know how the heap works before I learn how to deal with memory fragmentation, because I don't have enough experience with memory allocation or C++ to dive straight into that.
The memory is managed by the OS, so the answer depends on the OS/platform in use. The C++ specification does not specify how memory is allocated or freed at a lower level; it only specifies behavior in terms of object lifetime.
While multi-user desktop/server/phone operating systems (like Windows, Linux, macOS, Android, …) have broadly similar ways of managing memory, it can be completely different on embedded systems.
What is in control of choosing where on the heap memory is stored and how does it choose that?
It's the OS that is responsible for that. How exactly depends, as already said, on the OS. The "OS" could also be a thin layer combining the runtime library with a minimal OS such as IncludeOS.
Does this mean that the heap is shared between all processes?
That depends on the point of view. The address space is, for multi-user systems, in general not shared between processes. The OS ensures that one process cannot access the memory of another process, which it enforces through virtual address spaces. But the OS does distribute the whole of physical RAM among all processes.
For embedded systems it could even be the case that each process has a fixed amount of preallocated memory, not shared between processes, with no way to allocate new memory or free it. It is then up to developers to manage that preallocated memory themselves, by providing custom allocators to the stdlib objects and constructing in allocated storage, along the lines of the sketch below.
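For a concrete (if simplified) picture, here is one way to do that with the C++17 std::pmr facilities; the pool size and the container choice are arbitrary:

    #include <array>
    #include <cstddef>
    #include <memory_resource>
    #include <vector>

    int main() {
        // A fixed, statically preallocated block: no heap involved.
        static std::array<std::byte, 4096> pool;

        // monotonic_buffer_resource hands out chunks of the pool;
        // null_memory_resource() makes exhaustion throw instead of
        // silently falling back to the global heap.
        std::pmr::monotonic_buffer_resource arena(
            pool.data(), pool.size(), std::pmr::null_memory_resource());

        // A standard container that allocates only from the fixed pool.
        std::pmr::vector<int> values(&arena);
        values.push_back(1);
        values.push_back(2);
        return 0;
    }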
I want to learn more about memory fragmentation
There are two kinds of fragmentation: fragmentation of the memory addresses exposed by the OS to the C++ runtime, and fragmentation on the hardware/OS side (which could be the same thing on an embedded system). How, and in which form, the memory organized by the OS might be fragmented can't be determined using the functions provided by the stdlib. And how fragmentation of the process's address space behaves depends again on the OS and also on the stdlib implementation in use.
None of these details are specified in the C++ standard. Each C++ implementation is free to implement them in whichever way works for it, as long as the end result agrees with the standard.
Each C++ compiler and operating system implements these low-level details in its own unique way. There is no specific answer to these questions that applies to every C++ compiler and every operating system. Over time, a lot of research has gone into profiling and optimizing memory allocation and deallocation algorithms for typical C++ applications, and some C++ implementations offer alternative memory allocation algorithms from which each application can choose whichever it thinks will work best for it. Of course, none of this is covered by the C++ standard.
Of course, all memory in your computer must be shared between all processes that are running on it, and your operating system is responsible for divvying it up and parceling it out to the processes when they request more memory. All that "returning memory to the OS" means is that a process's memory allocator has determined that it no longer needs a sufficiently large contiguous memory range that was used before, and it notifies the operating system that the range can be reassigned to another process.
What decides where on the heap memory is allocated?
From the perspective of a C++ programmer: It is decided by the implementation (of the C++ language).
From the perspective of a C++ standard library implementer (as an example of what may hypothetically be true for some implementation): It is decided by malloc which is part of the C standard library.
From the perspective of a malloc implementer (as an example of what may hypothetically be true for some implementation): The location of the heap in general is decided by the operating system (for example, on Linux systems it might be whatever address is returned by sbrk). The location of any individual allocation is up to the implementer to decide, as long as it stays within the limitations established by the operating system and the specification of the language.
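On Linux you can observe this directly. A throwaway sketch (sbrk is a legacy, Linux/glibc-specific interface, and glibc may serve large requests from a separate mmap region rather than the brk heap, so the exact behavior is an assumption here):

    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>  // sbrk(): Linux/glibc, legacy interface

    int main() {
        void* before = sbrk(0);                // current program break
        void* small  = std::malloc(100);       // likely served from the brk heap
        void* large  = std::malloc(100000000); // likely a separate mmap region
        void* after  = sbrk(0);                // the break may have moved

        std::printf("break before: %p\n", before);
        std::printf("break after:  %p\n", after);
        std::printf("small alloc:  %p (near the break)\n", small);
        std::printf("large alloc:  %p (far away, if mmap was used)\n", large);

        std::free(small);
        std::free(large);
        return 0;
    }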
Note that heap memory is called "free store" in C++. I think this is to avoid confusion with the heap data structure which is unrelated.
I understand what the stack is
Note that there is no such thing as "stack memory" in the C++ language. The fact that C++ implementations store automatic variables in such manner is an implementation detail.
The physical memory is indeed shared between all processes, but in C++ the delete keyword does not necessarily return memory to the operating system; the allocator typically keeps it for reuse later on. Where an allocation lands depends on how much memory you want: there has to be a large enough free block, and the allocator picks one according to a placement strategy, commonly first fit, best fit, or worst fit (you can read more on those strategies elsewhere). The name RAM, random access memory, basically tells you where to search for your memory :D
It is however possible to get the same memory location when you have a small program and restart it multiple times.

Is access to the heap serialized?

One rule every programmer quickly learns about multithreading is:
If more than one thread has access to a data structure, and at least one of the threads might modify that data structure, then you'd better serialize all accesses to that data structure, or you're in for a world of debugging pain.
Typically this serialization is done via a mutex -- i.e. a thread that wants to read or write the data structure locks the mutex, does whatever it needs to do, and then unlocks the mutex to make it available again to other threads.
Which brings me to the point: the memory-heap of a process is a data structure which is accessible by multiple threads. Does this mean that every call to default/non-overloaded new and delete is serialized by a process-global mutex, and is therefore a potential serialization-bottleneck that can slow down multithreaded programs? Or do modern heap implementations avoid or mitigate that problem somehow, and if so, how do they do it?
(Note: I'm tagging this question linux, to avoid the correct-but-uninformative "it's implementation-dependent" response, but I'd also be interested in hearing about how Windows and MacOS/X do it as well, if there are significant differences across implementations)
new and delete are thread safe
The following functions are required to be thread-safe:
The library versions of operator new and operator delete
User replacement versions of global operator new and operator delete
std::calloc, std::malloc, std::realloc, std::aligned_alloc, std::free
Calls to these functions that allocate or deallocate a particular unit of storage occur in a single total order, and each such deallocation call happens-before the next allocation (if any) in this order.
With gcc, new is implemented by delegating to malloc, and we see that their malloc does indeed use a lock. If you are worried about your allocation causing bottlenecks, write your own allocator.
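To give a feel for what "write your own allocator" can mean at its simplest: the replacement versions of the global operators must themselves be thread-safe, as the quote above requires. A hedged sketch (a counting pass-through, not a real pooled allocator):

    #include <atomic>
    #include <cstdlib>
    #include <new>

    // Replacement global operator new/delete. These must themselves be
    // thread-safe; here malloc/free provide that, and the statistics
    // counter is atomic. A real pooled allocator would go here instead.
    static std::atomic<std::size_t> g_allocations{0};

    void* operator new(std::size_t size) {
        g_allocations.fetch_add(1, std::memory_order_relaxed);
        if (void* p = std::malloc(size)) return p;
        throw std::bad_alloc{};
    }

    void operator delete(void* p) noexcept { std::free(p); }
    void operator delete(void* p, std::size_t) noexcept { std::free(p); }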
The answer is yes, but in practice it is usually not a problem.
If it is a problem for you, you may try replacing your implementation of malloc with tcmalloc, which reduces, but does not eliminate, possible contention (since there is ultimately only one heap that needs to be shared among the threads of a process).
TCMalloc assigns each thread a thread-local cache. Small allocations are satisfied from the thread-local cache. Objects are moved from central data structures into a thread-local cache as needed, and periodic garbage collections are used to migrate memory back from a thread-local cache into the central data structures.
There are also other options like using custom allocators and/or specialized containers and/or redesigning your application.
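To illustrate the thread-local-cache idea from the TCMalloc quote in miniature (a toy sketch only: it never migrates cached blocks back to a central pool, which a real allocator must do):

    #include <cstddef>
    #include <vector>

    // A thread-local cache of fixed-size blocks. Cache hits take no lock
    // at all; only the fallback path touches the (potentially locked)
    // global allocator.
    template <std::size_t BlockSize>
    struct ThreadCache {
        static void* allocate() {
            auto& cache = freelist();
            if (!cache.empty()) {              // lock-free fast path
                void* p = cache.back();
                cache.pop_back();
                return p;
            }
            return ::operator new(BlockSize);  // slow path: global heap
        }

        static void deallocate(void* p) {
            freelist().push_back(p);           // keep block for this thread
        }

    private:
        static std::vector<void*>& freelist() {
            thread_local std::vector<void*> list;
            return list;
        }
    };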
You tried to head off the "it's architecture/system dependent" answer, but the need for threads to serialize their accesses really only arises when the heap grows or shrinks, i.e. when the program needs to expand it or return part of it to the system.
The first answer has to be simply that it's implementation dependent, without any system dependencies, because normally libraries obtain large chunks of memory on which to base the heap and administer those internally, which actually makes the problem operating system and architecture independent.
The second answer is that, of course, if you have only one heap for all the threads, you have a possible bottleneck whenever the active threads compete for a single chunk of memory. There are several approaches to this. You can have a pool of heaps to allow parallelism, and make different threads use different pools for their requests. The largest potential problem is in requesting memory, since that is where the bottleneck occurs; on returning memory there is no such issue, as you can act more like a garbage collector: queue the returned chunks and let a thread dispatch them and put them back in the proper places to preserve the heaps' integrity. Having multiple heaps even allows you to classify them by priority, by chunk size, and so on, so the risk of collision is kept low by the class of problem you are dealing with. This is the case for operating system kernels like *BSD, which use several memory heaps, each somewhat dedicated to the kind of use it is going to receive (one for the disk I/O buffers, one for virtual-memory-mapped segments, one for process virtual memory space management, etc.).
I recommend reading The Design and Implementation of the FreeBSD Operating System, which explains very well the approach used in the kernel of BSD systems. It is general enough that a great percentage of other systems probably follow the same or a very similar approach.

Is shared virtual memory used when multiple processes read a file using file pointer in Linux?

I wrote a C++ program which reads a file using a file pointer, and I need to run multiple processes at the same time. Since the file can be huge (100 MB or more), I think I need to use shared memory to reduce memory usage across the processes (for example, an IPC library like boost::interprocess::shared_memory_object).
But is that really necessary? I think that if multiple processes read the same file, the virtual memory of each process is mapped to the same physical memory for the file through the page tables.
I read a Linux doc and they said,
Shared Virtual Memory

Although virtual memory allows processes to have separate (virtual) address spaces, there are times when you need processes to share memory. For example there could be several processes in the system running the bash command shell. Rather than have several copies of bash, one in each process's virtual address space, it is better to have only one copy in physical memory and all of the processes running bash share it. Dynamic libraries are another common example of executing code shared between several processes. Shared memory can also be used as an Inter Process Communication (IPC) mechanism, with two or more processes exchanging information via memory common to all of them. Linux supports the Unix System V shared memory IPC.
Also, Wikipedia says,
In computer software, shared memory is either

a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time. One process will create an area in RAM which other processes can access, or

a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance instead, by using virtual memory mappings or with explicit support of the program in question. This is most often used for shared libraries and for XIP.
Therefore, what I am really curious about is whether shared virtual memory is supported at the OS level or not.
Thanks in advance.
Regarding your first question - if you want your data to be accessible by multiple processes without duplication you'll definitely need some kind of a shared storage.
In C++ I'd surely use boost's shared_memory_object. That's a valid option to share (large) data among processes and it has good documentation with examples (http://www.boost.org/doc/libs/1_55_0/doc/html/interprocess/sharedmemorybetweenprocesses.html).
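For reference, the basic pattern from that documentation looks roughly like this (the segment name and size are placeholders):

    #include <boost/interprocess/shared_memory_object.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <cstring>

    namespace bip = boost::interprocess;

    int main() {
        // Writer side: create a named segment and give it a size.
        bip::shared_memory_object shm(bip::create_only, "MySharedData",
                                      bip::read_write);
        shm.truncate(1024);

        bip::mapped_region region(shm, bip::read_write);
        std::memcpy(region.get_address(), "hello", 6);

        // A reader would construct the object with bip::open_only and map
        // the same name; both mappings refer to the same physical pages.
        // When every process is done with it:
        // bip::shared_memory_object::remove("MySharedData");
        return 0;
    }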
Using mmap() is a more low-level approach usually used in C. To use it as an IPC you'll have to make the mapped region shared. From http://man7.org/linux/man-pages/man2/mmap.2.html:
MAP_SHARED
Share this mapping. Updates to the mapping are visible to other processes that map this file, and are carried through to the underlying file. The file may not actually be updated until msync(2) or munmap() is called.
Also on that page there's an example of mapping a file to shared memory.
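A minimal sketch in the spirit of that example (the file name and sizes are arbitrary; error handling is kept to a bare minimum):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        // Open (or create) the backing file and make sure it has a size.
        int fd = open("/tmp/shared.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, 4096) != 0) { perror("setup"); return 1; }

        // MAP_SHARED: every process mapping this file sees the same pages.
        char* p = (char*)mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "visible to every process that maps this file");
        msync(p, 4096, MS_SYNC);   // push the update to the file now

        munmap(p, 4096);
        close(fd);
        return 0;
    }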
In either case there are at least two things to remember:
You need synchronization if there are multiple processes that modify the shared data.
You can't use pointers, only offsets from the beginning of the mapped region.
Here's an explanation from the boost docs:
If several processes map the same file/shared memory, the mapping address will be surely different in each process. Since each process might have used its address space in a different way (allocation of more or less dynamic memory, for example), there is no guarantee that the file/shared memory is going to be mapped in the same address.
If two processes map the same object in different addresses, this invalidates the use of pointers in that memory, since the pointer (which is an absolute address) would only make sense for the process that wrote it. The solution for this is to use offsets (distance) between objects instead of pointers: If two objects are placed in the same shared memory segment by one process, the address of each object will be different in another process but the distance between them (in bytes) will be the same.
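Boost.Interprocess ships a ready-made smart pointer for this, boost::interprocess::offset_ptr. A hand-rolled sketch of the same self-relative idea, just to show why the trick works under any mapping address:

    #include <cstddef>

    // A self-relative "pointer": it stores the distance in bytes from the
    // link object itself to its target. That distance is identical in
    // every process, whatever address the segment happens to be mapped at.
    struct OffsetLink {
        std::ptrdiff_t offset = 0;  // 0 means "null"

        void set(void* target) {
            offset = static_cast<char*>(target)
                   - reinterpret_cast<char*>(this);
        }
        void* get() {
            return offset == 0 ? nullptr
                               : reinterpret_cast<char*>(this) + offset;
        }
    };

Note that both the link and its target must live inside the same mapped segment for the distance to stay stable across processes.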
Regarding the OS support: yes, shared memory is an OS-specific feature.
In Linux, mmap() is actually implemented in the kernel and in modules, and can be used to transfer data between user space and kernel space.
Windows also has its specifics:
Windows shared memory creation is a bit different from portable shared memory creation: the size of the segment must be specified when creating the object and can't be specified through truncate like with the shared memory object. Take in care that when the last process attached to a shared memory is destroyed the shared memory is destroyed so there is no persistency with native windows shared memory.
Your question doesn't make sense.
I think I need use shared memory. (For example IPC library like boost::interprocess::shared_memory_object).
If you use shared memory, the memory is shared.
I think if multiple processes read same file, then virtual memory of each processes mapped to same physical memory of file thru page table.
Now you're talking about memory-mapped I/O. It isn't the same thing. However more probably it is what you need in this situation.

Is memory allocation a system call?

Is memory allocation a system call? For example, malloc and new. Is the heap shared by different processes and managed by the OS? What about private heaps? If memory allocation in the heap is managed by the OS, how expensive is it?
I would also like to have some link to places where I can read more about this topic.
In general, malloc and new do not perform a system call at each invocation. However, they use a lower-level mechanism to allocate large pages of memory. On Windows, the lower mechanism is VirtualAlloc(). I believe on POSIX systems, this is somewhat equivalent to mmap(). Both of these perform a system call to allocate memory to the process at the OS level. Subsequent allocations will use smaller parts of those large pages without incurring a system call.
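A toy sketch of that layering on POSIX: one up-front mmap() system call, then many cheap user-space allocations out of the chunk (real allocators add free lists, reuse, and thread safety, all omitted here):

    #include <cstddef>
    #include <sys/mman.h>

    // One system call reserves a big chunk; subsequent "allocations" are
    // just pointer arithmetic in user space. This toy arena never frees
    // or reuses anything, which real allocators of course must do.
    class BumpArena {
    public:
        explicit BumpArena(std::size_t bytes) {
            void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            base_ = (p == MAP_FAILED) ? nullptr : static_cast<char*>(p);
            end_  = base_ ? base_ + bytes : nullptr;
            cur_  = base_;
        }

        void* allocate(std::size_t n) {
            n = (n + 15) & ~std::size_t{15};       // keep 16-byte alignment
            if (!cur_ || cur_ + n > end_) return nullptr;
            void* p = cur_;
            cur_ += n;                             // no system call here
            return p;
        }

    private:
        char* base_ = nullptr;
        char* end_  = nullptr;
        char* cur_  = nullptr;
    };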
The heap is normally internal to a process and is not shared between processes. If you need that, most OSes have an API for allocating shared memory. A portable wrapper for these APIs is available in the Boost.Interprocess library.
If you would like to learn more about memory allocation and relationship with the OS, you should take a look at a good book on operating systems. I always suggest Modern Operating Systems by Andrew S. Tanenbaum as it is very easy to read.
(Assuming an operating system with memory protection. Might not be the case e.g. in embedded devices.)
Is memory allocation a system call?
Not necessarily each allocation. The process needs to call the kernel if its heap is not large enough for the requested allocation already, but C libraries usually request larger chunks when they do so, with the aim to reduce the number of system calls.
Is the heap shared by different processes and managed by the OS. What about private heap?
The heap is not shared between processes. It's shared between threads though.
How expensive kernel memory allocation system calls are depends entirely on the OS. Since that's a very common thing, you can expect it to be efficient under normal circumstances. Things get complicated in low RAM situations.
See the layered memory management in Win32.
Memory is allocated from the system as pages via system calls, but if there is space available in the already-committed pages, the memory manager will satisfy a request without switching into kernel mode. The nice thing about HeapAlloc is that it provides fine-grained control over the allocation, whereas VirtualAlloc rounds the allocation up to a whole page, which may result in excessive memory usage.
Basically, the default heap and private heaps are treated the same, except that the default heap's size is specified at link time. The default heap size is 1 MB and grows as required.
See this article for more details
You can also find more information in this thread
Memory allocation functions and language constructs like malloc/free and new/delete are not system calls. malloc/free are part of the C/C++ library, and new/delete are part of the C++ runtime system. Calls to either can occasionally lead to system calls. In other languages, memory allocation is implemented in a similar way.
In general, memory management can't be implemented without involving the OS at all, because memory is one of the main system resources, and global memory management is therefore done by the OS kernel. But because system calls are relatively expensive, people try to design languages and memory allocation libraries in such a way as to minimize the number of system calls.
As far as I know, the heap is an intra-process entity. That means all memory allocation/deallocation requests are managed entirely by the process itself. The operating system knows only the heap's location and size, and services two types of requests from the intra-process memory management system:
add memory page at virtual address X
release memory page from virtual address X
The local memory management system requests these services when it decides that it has too little memory in the heap's memory pool, or that it has too much.
Even though memory allocation is usually designed to minimize the number of system calls, heap allocation still stays about an order of magnitude more expensive than allocation on the stack. This is because the allocation/deallocation algorithms of a heap are much more complex and expensive than those of a stack.

Can Win32 "move" heap-allocated memory?

I have a .NET/native C++ application. Currently, the C++ code allocates memory on the default heap, which persists for the life of the application. Basically, functions/commands are executed in the C++ code which result in allocation/modification of the current persistent memory. I am investigating an approach for cancelling one of these functions/commands mid-execution. We have hundreds of these commands, and many are very complicated (legacy) code.
The brute-force approach that I am trying to avoid is modifying each and every command/function to check for the cancellation and do all the appropriate clean-up (freeing heap memory). I am investigating a multi-threaded approach in which an additional thread receives the cancellation request and terminates the command-execution thread. I would want all dynamic memory to be allocated on a "private heap" using HeapCreate() (Win32). This way, the private heap could be destroyed by the thread handling the cancellation request. However, if the command runs to completion, I need the dynamic memory to persist. In this case, I would like to do the logical equivalent of "moving" the private heap memory to the default/process heap without incurring the cost of an actual copy. Is this in any way possible? Does this even make sense?
Alternatively, I recognize that I could just have a new private heap for every command/function execution (each will be a new thread). The private heap could be destroyed if the command is cancelled, or it would survive if the command completes. Is there any problem with the number of heaps growing indefinitely? I know there is some overhead involved with each heap. What limitations might I run into?
I am running on Windows 7 64-bit with 8GB RAM (consider this the target platform). The application I am working with is about 1 million SLOC (half C++, half C#). I am looking for any experience/suggestions with private heap management, or just alternatives to my solution.
You might be better off with separate processes instead of separate threads:
use memory mapped files (i.e. not a file at all, just cross-process shared memory)
killing a process is 'cleaner' than killing a thread
I think you can have the shared memory 'survive' the killing without a move - you map/unmap instead of move
although you might need to do some memory management on your own.
Anyhow, worth looking into. I was looking into using inter-process memory for a few other things, and it had some unusual properties (I can't recall all of it clearly, it was a while ago), and you might be able to take advantage of it.
Just an idea!
From MSDN's Heap Functions page:
"Memory allocated by HeapAlloc is not movable. The address returned by HeapAlloc is valid until the memory block is freed or reallocated; the memory block does not need to be locked."
Can you re-link the legacy apps against your own malloc() implementation? If so, you should be able to manage without modifying the rest of the code. Your custom malloc library can track allocated blocks by thread, and have a FreeAllByThreadId() function which you call after killing the legacy function's thread. You could use private heaps inside the library. A sketch of the idea follows.
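A rough sketch of that tracking layer (the names TrackedAlloc and FreeAllByThreadId are illustrative, following the answer, not a real API; individual frees and the private-heap backing are omitted):

    #include <cstdlib>
    #include <mutex>
    #include <thread>
    #include <unordered_map>
    #include <vector>

    // Every block is remembered under the allocating thread's id.
    static std::mutex g_lock;
    static std::unordered_map<std::thread::id, std::vector<void*>> g_blocks;

    void* TrackedAlloc(std::size_t size) {
        void* p = std::malloc(size);
        if (p) {
            std::lock_guard<std::mutex> guard(g_lock);
            g_blocks[std::this_thread::get_id()].push_back(p);
        }
        return p;
    }

    // Called by the watchdog after killing the legacy command's thread:
    // releases every block that thread ever allocated.
    void FreeAllByThreadId(std::thread::id tid) {
        std::lock_guard<std::mutex> guard(g_lock);
        auto it = g_blocks.find(tid);
        if (it == g_blocks.end()) return;
        for (void* p : it->second) std::free(p);
        g_blocks.erase(it);
    }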
An alternative to private heaps might be doing your own allocation from memory-mapped files. See "Creating Named Shared Memory." You create the shared memory while initializing the alloc library for the legacy thread. On success, map it into the main thread so your c# can access it; on termination, close it and it is released to the system.
A heap is a sort of big chunk of memory: a user-level memory manager. A heap is created by lower-level system memory calls (e.g., sbrk on Linux and VirtualAlloc on Windows). Within a heap, you can then request or return small chunks of memory via malloc/new/free/delete. By default, a process has a single heap (unlike stacks: each thread gets its own stack, but all threads share the one heap). But you can have many heaps.
Is it possible to combine two heaps without copying? A heap is essentially a data structure that maintains a list of used and freed memory chunks, so a heap has to keep some bookkeeping data, called metadata, and this metadata is per heap. AFAIK, no heap manager supports a merge operation on two heaps. I have reviewed the entire source code of the malloc implementation in Linux glibc (Doug Lea's implementation), and there is no such operation. The Windows Heap* functions are implemented in a similar way. So it is currently impossible to move or merge two separate heaps.
Is it possible to have many heaps? I don't think there is a big problem with having many heaps. As I said before, a heap is just a data structure that keeps track of used/freed memory chunks, so there is some amount of overhead, but it is not that severe. When you look at a malloc implementation, there is malloc_state, which is the basic data structure for each heap. For example, you can create another heap with create_mspace (on Windows, HeapCreate), and you then get a new malloc state. It's not that big. So if this trade-off (some per-heap overhead versus implementation ease) is acceptable, you may go ahead.
If I were you, I'd try the approach you describe. It makes sense to me, and having a lot of heap objects would not add much overhead.
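A hedged sketch of the per-command private heap lifecycle on Win32 (the cancellation flag is a stand-in for whatever signalling the watchdog thread actually uses):

    #include <windows.h>

    int main() {
        // One private heap per command. 0/0/0 = default flags, zero
        // initial size, growable with no maximum.
        HANDLE heap = HeapCreate(0, 0, 0);
        if (!heap) return 1;

        void* p = HeapAlloc(heap, HEAP_ZERO_MEMORY, 1024);
        (void)p;
        // ... run the command, allocating everything from this heap ...

        bool cancelled = false;  // set by the watchdog thread in the real app
        if (cancelled) {
            HeapDestroy(heap);   // one call frees every allocation at once
        }
        // Otherwise the heap, and everything allocated from it, simply
        // lives on for the rest of the process's lifetime.
        return 0;
    }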
Also, it should be noted that technically moving memory regions is impossible: pointers that pointed into the moved memory region would become dangling pointers.
P.S. Your problem looks like a transaction, especially Software Transactional Memory (STM). A typical STM implementation buffers pending memory writes, and then commits them to the real system memory if the transaction had no conflict.
No. Memory cannot be moved between heaps.