I understand that stack memory can only be shared by threads within the same process.
In Inter-Process Communication (IPC), processes can share the same segment of memory via the shmget() system call.
What can this shared memory segment be? A heap, or something else?
Update:
I came up with this question after browsing questions about the difference between stack and heap memory. Could heap memory be the shared memory segment obtained via shmget()? That is, could heap memory be shared among multiple processes?
Update II:
Does a parent process share the same heap with its child process? I found this online:
"The heap, code and library regions of the parent are shared by the child. A new stack is allocated to the child and the parent's stack is copied into the child's stack."
Does this mean the same heap is shared between different processes?
"Also there might be a global heap (look at Win32 GlobalAlloc() family functions for example) which is shared between processes, persists for the system runtime and indeed can be used for interprocess communications." reference: Is heap memory per-process? (or) Common memory location shared by different processes?
In the Unix operating system, shared memory lives outside of any individual process's address space. By using shmat you basically get a pointer to some space that the kernel has allocated for you. These segments can be shared between processes and attached to by any number of processes. As with a file, you can set permissions so that not every process can see and/or attach to the memory.
In this context, shared memory is not a traditional stack or heap - it's a chunk of memory that the kernel gives you access to (assuming the correct permissions). Again, it lives outside of any one process's address space because the kernel manages it. Usually the memory remains in use even if no processes are attached to it. On Linux, ipcs -m lists these segments.
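For concreteness, here is a minimal sketch of creating and attaching a System V shared memory segment; the key and size below are arbitrary illustrative values, not anything from the question:

    // Minimal sketch: create and attach a System V shared memory segment.
    // The key (0x1234) and size are illustrative values only.
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        // Create (or open) a 4 KiB segment readable/writable by the owner.
        int shmid = shmget(0x1234, 4096, IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }

        // Attach the segment; the kernel picks the address in our address space.
        void *addr = shmat(shmid, nullptr, 0);
        if (addr == (void *)-1) { perror("shmat"); return 1; }

        // Any process that attaches the same segment sees the same bytes.
        std::strcpy(static_cast<char *>(addr), "hello from process A");

        shmdt(addr);                          // detach; the segment keeps existing
        // shmctl(shmid, IPC_RMID, nullptr);  // would mark it for removal
        return 0;
    }

Note that the segment outlives the process that created it unless it is explicitly removed, which is exactly the "lives outside of any one process space" behaviour described above.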
I have a C++ application that runs on Oracle Linux.
Suppose I have created a few objects with the new operator. Although I call delete to deallocate them, a force kill would mean that code is never reached.
So, if I force-kill (kill -9) the process, will the memory dynamically allocated with new be deallocated by the operating system? I have not been able to find a straightforward answer to this, so I would appreciate some information.
Thanks in advance.
If I force-kill (kill -9) the process, will the dynamically allocated memory (using the new operator) be deallocated by the operating system?
Memory is tied to a process through the virtual memory system and the memory management unit (MMU). So yes, all of the process's memory (not just what was allocated through new) will be freed.
Exceptions to this are global inter-process communication (IPC) resources like shared memory, cached files, etc.
When a process dies, by whatever means, all of its resources, including memory and file objects, are cleaned up by the kernel. When you kill a process it stops running immediately, so no cleanup code, including destructors, is run. So yes, all memory is deallocated, but it happens at a much lower level than heaps and stacks.
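One way to see this for yourself is a small test program (hypothetical, just for illustration): on a normal run the destructor's message is printed, but if you kill -9 the process while it sleeps, the destructor never runs, yet the kernel still reclaims the memory:

    #include <cstdio>
    #include <unistd.h>

    struct Resource {
        Resource()  { std::puts("allocated"); }
        ~Resource() { std::puts("destructor ran"); }  // never printed under kill -9
    };

    int main() {
        Resource *r = new Resource;   // heap allocation via new
        std::printf("pid=%d, sleeping; try `kill -9` on this pid\n",
                    static_cast<int>(getpid()));
        sleep(60);                    // window in which to kill the process
        delete r;                     // reached only on a normal run
        return 0;
    }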
In our code base, memory is allocated for an object in one process and then passed to another process, which is supposed to free that object's memory. With valgrind we can only attach to one process at a time, so the allocation shows up as a memory leak.
Edit 1: The other process is not a forked child; both are started independently.
Edit 2: Both processes are able to access a common (shared) address space.
If I enumerate the heaps in my process using the GetProcessHeaps API, is there a way to tell which module(s) those heaps were created by?
Here's why I need this: for the purposes of my security application, I need to lock the virtual memory used by my process (e.g. memory used by the Windows common controls, anything allocated via the new operator, COM, etc.).
The reason I need to know which module created a heap is to exclude any DLLs that are loaded into my process but have nothing to do with it. For example, TeamViewer injects itself into running processes to add whatever it needs, and I don't want to lock its private heap, if it has one.
If you are only concerned with your own allocations, then you can just use your own private heap and override the global new and delete operators to use that heap.
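A minimal sketch of that idea on Win32, assuming you only need to cover the plain scalar new/delete (array and aligned overloads are omitted), might look like this:

    #include <windows.h>
    #include <new>

    // Lazily created private heap used for this module's own allocations.
    static HANDLE PrivateHeap() {
        static HANDLE heap = HeapCreate(0, 0, 0);   // growable, default sizes
        return heap;
    }

    void *operator new(std::size_t size) {
        void *p = HeapAlloc(PrivateHeap(), 0, size ? size : 1);
        if (!p) throw std::bad_alloc();
        return p;
    }

    void operator delete(void *p) noexcept {
        if (p) HeapFree(PrivateHeap(), 0, p);
    }

With this in place, everything allocated with plain new in your binary comes out of one heap handle that you control, so you can lock or inspect it as a single unit without caring about heaps created by foreign DLLs.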
I have a .NET/native C++ application. Currently, the C++ code allocates memory on the default heap, which persists for the life of the application. Basically, functions/commands are executed in the C++ code, which results in allocation/modification of that persistent memory. I am investigating an approach for cancelling one of these functions/commands mid-execution. We have hundreds of these commands, and many are very complicated (legacy) code.
The brute-force approach that I am trying to avoid is modifying each and every command/function to check for the cancellation and do all the appropriate clean-up (freeing heap memory). I am investigating a multi-threaded approach in which an additional thread receives the cancellation request and terminates the command-execution thread. I would want all dynamic memory to be allocated on a "private heap" using HeapCreate() (Win32). This way, the private heap could be destroyed by the thread handling the cancellation request. However, if the command runs to completion, I need the dynamic memory to persist. In this case, I would like to do the logical equivalent of "moving" the private heap memory to the default/process heap without incurring the cost of an actual copy. Is this in any way possible? Does this even make sense?
Alternatively, I recognize that I could just have a new private heap for every command/function execution (each will be a new thread). The private heap could be destroyed if the command is cancelled, or it would survive if the command completes. Is there any problem with the number of heaps growing indefinitely? I know there is some overhead involved with each heap. What limitations might I run into?
I am running on Windows 7 64-bit with 8GB RAM (consider this the target platform). The application I am working with is about 1 million SLOC (half C++, half C#). I am looking for any experience/suggestions with private heap management, or just alternatives to my solution.
You might be better off with separate processes instead of separate threads:
use memory-mapped files (i.e. not a file at all, just cross-process shared memory)
killing a process is 'cleaner' than killing a thread
I think you can have the shared memory 'survive' the killing without a move - you map/unmap instead of moving,
although you might need to do some memory management of your own.
Anyhow, it's worth looking into. I was looking into using inter-process memory for a few other things, and it had some unusual properties (I can't recall all of it clearly; it was a while ago), and you might be able to take advantage of it.
Just an idea!
From MSDN's Heap Functions page:
"Memory allocated by HeapAlloc is not movable. The address returned by HeapAlloc is valid until the memory block is freed or reallocated; the memory block does not need to be locked."
Can you re-link the legacy code against your own malloc() implementation? If so, you should be able to manage without modifying the rest of the code. Your custom malloc library can track allocated blocks by thread and provide a "FreeAllByThreadId()" function which you call after killing the legacy function's thread. You could use private heaps inside the library.
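As a rough illustration of the idea (FreeAllByThreadId and the bookkeeping container are hypothetical names, plain malloc is used for brevity, and the same bookkeeping would work with a private heap), such a wrapper could look roughly like this:

    #include <windows.h>
    #include <algorithm>
    #include <cstdlib>
    #include <mutex>
    #include <unordered_map>
    #include <vector>

    // Hypothetical registry: thread id -> blocks allocated on that thread.
    static std::mutex g_lock;
    static std::unordered_map<DWORD, std::vector<void *>> g_blocksByThread;

    void *TrackedMalloc(std::size_t size) {
        void *p = std::malloc(size);
        if (p) {
            std::lock_guard<std::mutex> guard(g_lock);
            g_blocksByThread[GetCurrentThreadId()].push_back(p);
        }
        return p;
    }

    void TrackedFree(void *p) {
        if (!p) return;
        std::lock_guard<std::mutex> guard(g_lock);
        for (auto &entry : g_blocksByThread) {
            auto &blocks = entry.second;
            auto pos = std::find(blocks.begin(), blocks.end(), p);
            if (pos != blocks.end()) { blocks.erase(pos); break; }
        }
        std::free(p);
    }

    // Free everything a given thread allocated, e.g. after terminating it.
    void FreeAllByThreadId(DWORD threadId) {
        std::lock_guard<std::mutex> guard(g_lock);
        auto it = g_blocksByThread.find(threadId);
        if (it == g_blocksByThread.end()) return;
        for (void *p : it->second) std::free(p);
        g_blocksByThread.erase(it);
    }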
An alternative to private heaps might be doing your own allocation from memory-mapped files. See "Creating Named Shared Memory" on MSDN. You create the shared memory while initializing the allocation library for the legacy thread. On success, map it into the main thread so your C# code can access it; on termination, close it and it is released to the system.
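A minimal sketch of page-file-backed named shared memory on Win32 (the mapping name and size below are made up for illustration):

    #include <windows.h>

    int main() {
        // Page-file-backed (no real file on disk), 64 KiB, with an arbitrary name.
        HANDLE mapping = CreateFileMappingW(
            INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
            0, 64 * 1024, L"Local\\MyCommandScratchMemory");
        if (!mapping) return 1;

        void *view = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
        if (!view) { CloseHandle(mapping); return 1; }

        // ... allocate the command's working data inside `view` ...

        UnmapViewOfFile(view);   // other handles/views can keep the memory alive
        CloseHandle(mapping);    // memory goes away when the last handle closes
        return 0;
    }

The memory persists as long as at least one handle to the mapping is open, which is what lets it survive the termination of the thread or process that did the allocating.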
A heap is a big chunk of memory managed by a user-level memory manager. A heap is created by lower-level system memory calls (e.g., sbrk on Linux or VirtualAlloc on Windows). From a heap, you can then request or return small chunks of memory via malloc/new/free/delete. By default, a process has a single heap (unlike the stack, the heap is shared by all threads), but you can have many heaps.
Is it possible to combine two heaps without copying? A heap is essentially a data structure that maintains a list of used and freed memory chunks, so it must keep some bookkeeping data, called metadata, and this metadata is per heap. AFAIK, no heap manager supports a merge operation on two heaps. I have reviewed the entire source code of the malloc implementation in Linux glibc (based on Doug Lea's implementation), and there is no such operation. The Windows Heap* functions are implemented in a similar way. So, it is currently impossible to move or merge two separate heaps.
Is it possible to have many heaps? I don't think having many heaps is a big problem. As I said before, a heap is just a data structure that keeps track of used/freed memory chunks, so there is some amount of overhead, but it's not that severe. If you look at a malloc implementation such as dlmalloc, there is malloc_state, the basic data structure for each heap. For example, you can create another heap with create_mspace (on Windows, HeapCreate), and you will get a new malloc state. It's not that big. So, if this trade-off (some per-heap overhead vs. ease of implementation) is acceptable, you can go ahead.
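For example, the per-command private heap idea from the question could be sketched like this (the command body and cancellation flag are placeholders; only the heap lifetime management is the point):

    #include <windows.h>

    // Run one command with its own heap. If the command is cancelled,
    // destroying the heap frees every allocation made from it in one call.
    HANDLE RunCommand(bool cancelled /* hypothetical cancellation flag */) {
        HANDLE heap = HeapCreate(0, 0, 0);          // growable private heap
        if (!heap) return nullptr;

        // Placeholder command body: all working memory comes from `heap`.
        void *work = HeapAlloc(heap, HEAP_ZERO_MEMORY, 1024);
        (void)work;                                 // ... build results here ...

        if (cancelled) {
            HeapDestroy(heap);                      // bulk-free, no per-object cleanup
            return nullptr;
        }

        // On success the heap simply stays alive; the caller keeps the handle
        // so the results remain valid and can be destroyed later.
        return heap;
    }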
If I were you, I'd try the approach you describe. It makes sense to me, and having a lot of heap objects would not add much overhead.
Also, note that moving memory regions is technically impossible: pointers into the moved region would become dangling pointers.
P.S. Your problem sounds like a transaction, in particular Software Transactional Memory (STM). A typical STM implementation buffers pending memory writes and then commits them to real system memory if the transaction had no conflict.
No. Memory cannot be moved between heaps.