I have two questions/concerns about using shared memory. I'm using CreateFileMapping to create a shared memory area between two processes.
1) I understand that I need to call CloseHandle on every handle returned from a CreateFileMapping or OpenFileMapping call in order to release the memory. My question is: do all handles get closed appropriately and the memory deallocated by Windows XP/7 if the programs using the shared memory exit without calling CloseHandle? I.e., is there any possibility of a memory leak after all processes using the memory have exited?
2) I use MapViewOfFile to get a pointer to the memory. In one instance I've assumed that the shared memory will always exist within the context of a method, so I've saved the return value of MapViewOfFile as a pointer, closed the handle to the memory, and am just using the pointer to the shared memory (while still locking access to it). Is this safe, or should I call MapViewOfFile every time I access the shared memory?
Thanks,
Ian
1) Yes, all handles are closed when a process terminates, whether it crashes or exits cleanly. No leaks here.
2) As long as you don't call UnmapViewOfFile, the memory will still be accessible to the process, even if the handle has been closed:
Although an application may close the file handle used to create a file mapping object, the system holds the corresponding file open until the last view of the file is unmapped
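In other words, the pattern from the question is safe. A minimal sketch (the section name and size are illustrative, error handling kept short):
#include <windows.h>

int main() {
    // Create a small named section backed by the page file
    // (the name and size here are illustrative).
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, 4096, "Local\\MyShm");
    if (hMap == NULL) return 1;

    char* view = (char*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (view == NULL) { CloseHandle(hMap); return 1; }

    // Closing the handle is fine: the mapped view keeps the section alive.
    CloseHandle(hMap);

    view[0] = 'x';          // still valid after CloseHandle

    UnmapViewOfFile(view);  // only now is the section actually released
    return 0;
}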
Related
I have a Boost.Interprocess managed_shared_memory on Windows, with a Boost.Interprocess vector stored in it. The vector is created or opened by
auto* vec = shm.find_or_construct< MyVector >( "Data" )( shmAllocator );
as shown in the Boost.Interprocess examples. My point is that I have now constructed or opened an object vec referencing the object inside the shared memory. I checked that the destructor of vec is only called when I use shm.destroy<MyVector>("Data"), and if I call delete vec the application crashes.
Now how do I properly release the object vec without destroying the underlying data?
The complete scenario:
Two users are running my software, sharing data via shared memory (on Windows this is emulated using a file).
One user exits the software. If I do not call destroy, I have a memory leak; if I do call it, then as stated in the Boost docs:
In Windows operating systems, current version supports an usually acceptable emulation of the UNIX unlink behaviour: the file is renamed with a random name and marked as to be deleted when the last open handle is closed
Another user starts the software and it tries to share the memory, but because the file has been renamed, it is unable to share the memory with the other running instance of my software.
The vector is created or opened by
This is slightly mixing up concepts. It's looked up, and constructed if necessary. (open_or_create applies to actual shareable objects like memory maps or shared memory objects).
I checked that the destructor of vec is only called when I use shm.destroy<MyVector>("Data"), and if I call delete vec the application crashes.
That's both by design.
One user exits the software. If I do not call destroy, I have a memory leak.
Not really. If you don't destroy the vector, it still exists in the managed segment. This means that you can reopen the shared memory segment and still find it there.
To remove the shared segment, use remove()
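To make the distinction concrete, here is a hedged sketch (segment and object names are illustrative); note that the vector is never deleted through its raw pointer:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>

using namespace boost::interprocess;

typedef allocator<int, managed_shared_memory::segment_manager> ShmAllocator;
typedef vector<int, ShmAllocator> MyVector;

int main() {
    managed_shared_memory shm(open_or_create, "MySegment", 65536);
    ShmAllocator shmAllocator(shm.get_segment_manager());

    MyVector* vec = shm.find_or_construct<MyVector>("Data")(shmAllocator);
    vec->push_back(42);

    // Never `delete vec`: the object lives inside the segment.
    // Simply letting `shm` go out of scope unmaps the segment;
    // "Data" survives and can be found again by the next process.

    // Only when the data itself should go away:
    //   shm.destroy<MyVector>("Data");             // destroys the vector
    //   shared_memory_object::remove("MySegment"); // removes the segment
    return 0;
}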
The docs say this about removal:
When the managed_mapped_file object is destroyed, the file is automatically unmapped, and all the resources are freed. To remove the file from the filesystem you could use standard C std::remove or Boost.Filesystem's remove() functions, but file removing might fail if any process still has the file mapped in memory or the file is open by any process.
To obtain a more portable behaviour, use file_mapping::remove(const char *) operation, which will remove the file even if it's being mapped. However, removal will fail in some OS systems if the file is open (e.g. by C++ file streams) and no delete share permission was granted to the file. But in most common cases file_mapping::remove is portable enough.
And here:
~basic_managed_mapped_file();
Destroys *this and indicates that the calling process is finished using the resource. The destructor function will deallocate any system resources allocated by the system for use by this process for this resource. The resource can still be opened again calling the open constructor overload. To erase the resource from the system use remove()
Additional info
If you really just want the vector gone after the last user releases it, use the interprocess shared_pointer: http://www.boost.org/doc/libs/1_64_0/doc/html/interprocess/interprocess_smart_ptr.html#interprocess.interprocess_smart_ptr.shared_ptr
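A hedged sketch of that approach (the segment name is illustrative, and I'm assuming the make_managed_shared_ptr helper described on that page):
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/smart_ptr/shared_ptr.hpp>

using namespace boost::interprocess;

int main() {
    managed_shared_memory shm(open_or_create, "MySegment", 65536);

    // The reference count lives in the segment itself, so the object
    // is destroyed only when the last shared_ptr in any process
    // releases it.
    typedef managed_shared_ptr<int, managed_shared_memory>::type IntPtr;
    IntPtr p = make_managed_shared_ptr(shm.construct<int>("Data")(42), shm);

    // ... use *p; once every process's copy of the pointer is gone,
    // the int is destroyed automatically ...
    return 0;
}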
I'm using boost::interprocess to share data between processes via managed shared memory. Clients may read from or write to the shared memory.
Clients first attempt to open an existing shared memory:
managed_shared_memory(open_only, "MySharedMemory");
If the open fails then the memory is created:
managed_shared_memory(create_only, "MySharedMemory");
Once the shared memory has been opened or created, the client increments the client count (an integer stored in the shared memory).
When a client's destructor is called, the client count is decremented. If the client count == 0, the shared memory is removed:
shared_memory_object::remove("MySharedMemory");
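A hedged sketch of that lifecycle (error handling trimmed; the count updates and the open/create window are racy, so a real version would guard them with a named_mutex):
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/exceptions.hpp>

using namespace boost::interprocess;

int main() {
    int* count = 0;
    try {
        // Existing segment: attach and bump the client count.
        managed_shared_memory shm(open_only, "MySharedMemory");
        count = shm.find<int>("ClientCount").first;
        ++*count;
        // ... use the shared memory ...
        if (--*count == 0)
            shared_memory_object::remove("MySharedMemory");
    } catch (const interprocess_exception&) {
        // No segment yet: create it with a count of one.
        managed_shared_memory shm(create_only, "MySharedMemory", 65536);
        count = shm.construct<int>("ClientCount")(1);
        // ... use the shared memory ...
        if (--*count == 0)
            shared_memory_object::remove("MySharedMemory");
    }
    return 0;
}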
So far so good. However, if a process crashes, it can't decrement the client count, so the memory isn't properly removed.
At this point a new client may successfully open the shared memory in whatever state it was left in instead of a fresh default state. This is problematic.
So my question is. What is the best way to manage the lifetime of shared memory?
Not crashing is a good idea, but I'm working in a plugin environment where something beyond my control could take everything down, and clients come and go continuously.
Another idea is to use pipes or sockets to verify that clients are still valid (e.g. ping them when the memory is opened and clean up manually if there is no response), but this feels like overkill.
I am relatively new to C++ and I am learning from another person's code.
His code reads from an mmapped file but does not free any of the mapped memory at the end. In my understanding, mmap() maps files into virtual memory. Don't I need to release the mapped memory somehow, e.g. by calling munmap()?
I believe you should release mapped memory with munmap. But it will be released automatically (like the close syscall for regular files or sockets) after exit(). Remember that relying on implicit closing/unmapping is bad style!
When you are done, just call munmap(). If your program is exiting anyway, there is no need; it will unmap the segment(s) automatically at exit.
munmap happens automatically on exit. So if the program is going to exit anyway, you don't really need to do it.
man munmap (man-pages 4.15) says:
The munmap() system call deletes the mappings for the specified address range, and causes further references to addresses within the range to generate invalid memory references. The region is also automatically unmapped when the process is terminated. On the other hand, closing the file descriptor does not unmap the region.
If the program doesn't exit, of course, you leak memory, just as with malloc (which nowadays uses mmap).
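For reference, a minimal sketch of the explicit pattern under discussion (the file name is illustrative):
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    int fd = open("data.bin", O_RDONLY);              // illustrative file
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return 1; }

    void* p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                 // closing the fd does NOT unmap the region
    if (p == MAP_FAILED) return 1;

    // ... read the mapped bytes through p ...

    munmap(p, st.st_size);     // explicit cleanup; optional right before exit
    return 0;
}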
I am learning to work with shared memory in C++. I found that under Windows I need to use the CreateFileMapping and MapViewOfFile functions. I want to share an array of char, so part of my code is:
HANDLE hBuffer = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, size, bufferName);
char * buffer = (char *) MapViewOfFile(hBuffer, FILE_MAP_ALL_ACCESS, 0, 0, size);
(there is checking for NULLs, of course) and at the end of using the shared memory I call:
UnmapViewOfFile(buffer); // returned true
CloseHandle(hBuffer); // returned true also
But in Resource Monitor I can see that no memory was released. When this is done several times, the allocated memory of the application keeps increasing and is never released. What am I doing wrong? Or is there another function for releasing shared memory?
Thanks for any answers.
Problem solved thanks to marcin_j:
Your code looks fine (well, you misspelled HANDLE). You can use procexp.exe from Sysinternals to find your HANDLE by name (if it is not found then it was closed). Also observe on the Performance tab how the Virtual Size of your app changes; there is also a Handles count that should change accordingly.
Also, observe what happens after you execute memset(buffer, 0, size); after MapViewOfFile: this is actually when the system will commit the memory and when your working set will rise.
My above comment is wrong: CreateFileMapping by default applies SEC_COMMIT, which commits the memory. But I suppose your memory is paged until memset is called; after that call, the paged pages are moved to physical memory, which raises the working set... if I am not wrong...
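Putting that together, here is a hedged sketch of the full sequence with the commit/working-set behaviour noted in comments (the name and size are illustrative):
#include <windows.h>
#include <cstring>

int main() {
    const DWORD size = 1 << 20;   // 1 MiB, illustrative
    HANDLE hBuffer = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, size, "MyBuffer");
    if (hBuffer == NULL) return 1;

    char* buffer = (char*)MapViewOfFile(hBuffer, FILE_MAP_ALL_ACCESS,
                                        0, 0, size);
    if (buffer == NULL) { CloseHandle(hBuffer); return 1; }

    // SEC_COMMIT (the default) commits the pages, but they only enter
    // this process's working set when first touched:
    memset(buffer, 0, size);

    UnmapViewOfFile(buffer);   // the view is released, working set shrinks
    CloseHandle(hBuffer);      // last handle + no views => section is freed
    return 0;
}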
I am currently trying to implement some interprocess communication using the Windows CreateFileMapping mechanism. I know that I need to create a file mapping object with CreateFileMapping first and then create a pointer to the actual data with MapViewOfFile. The example then puts data into the mapped file using CopyMemory.
In my application I have an image buffer (1 MB large) that I want to send to another process. So right now I obtain a pointer to the image and then copy the whole image buffer into the mapped file. But I wonder if this is really necessary. Isn't it possible to just copy an actual pointer into the shared memory that points to the image buffer data? I tried a bit but didn't succeed.
Different processes have different address spaces. If you pass a valid pointer in one process to another process, it will probably point to random data in the second process. So you will have to copy all the data.
I strongly recommend you use Boost.Interprocess. It has lots of goodies to manage this kind of stuff and even includes some special Windows-only functions in case you need to interoperate with other processes that use particular Win32 features.
The most important thing is to use offset pointers rather than regular pointers. Offset pointers are basically relative pointers (they store the difference between where the pointer is and where the thing pointed to is). This means that even if the two pointers are mapped into different address spaces, as long as the mappings are identical in structure, you are fine.
I've used all kinds of complicated data structures with offset smart pointers and it worked like a charm.
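A minimal sketch of offset pointers in action (segment and object names are illustrative):
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/offset_ptr.hpp>

using namespace boost::interprocess;

struct Node {
    int value;
    offset_ptr<Node> next;   // stores a relative offset, not an address
};

int main() {
    managed_shared_memory shm(open_or_create, "MySegment", 65536);

    Node* a = shm.find_or_construct<Node>("Head")();
    Node* b = shm.find_or_construct<Node>("Tail")();
    a->value = 1;
    a->next  = b;            // valid in every process mapping this segment,
    b->value = 2;            // even at different base addresses
    b->next  = 0;
    return 0;
}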
Shared memory doesn't mean sending and receiving of data. It is a memory region created for use by a number of processes at once. For that you have to follow some mechanism, such as locks, so that the data does not get corrupted.
In process 1:
CreateFileMapping(): creates the shared memory block with the name provided in the last parameter, if it is not already present, and returns a handle to it if successful.
MapViewOfFile(): maps (includes) this shared block into the process address space and returns a pointer to the view (not a handle).
Only through this pointer returned by MapViewOfFile() can you access that shared block.
In process 2:
OpenFileMapping(): if the shared memory block was successfully created by CreateFileMapping(), you can open it using the same name (the name used to create the shared memory block).
UnmapViewOfFile(): unmaps the shared memory block from the process address space. Call this function when you are done using the shared memory (i.e. after access, modification, etc.).
CloseHandle(): finally, to detach the shared memory block from the process, call this with the handle returned by OpenFileMapping() or CreateFileMapping() as the argument.
Though these functions look simple, the behaviour is tricky if the flags are not selected properly.
If you wish to read and write the shared memory, specify PAGE_READWRITE in CreateFileMapping() (PAGE_EXECUTE_READWRITE is only needed if the pages must also be executable).
Whenever you wish to access shared memory after creating it successfully, use FILE_MAP_ALL_ACCESS in MapViewOfFile().
It is better to specify FALSE (do not inherit the handle from the parent process) in OpenFileMapping(), as it will avoid confusion.
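A hedged sketch of the two-process flow described above (the name "MyShm" and the size are illustrative; locking is omitted):
#include <windows.h>
#include <cstring>

// Process 1: create the section and write into it.
void producer() {
    HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                  PAGE_READWRITE, 0, 4096, "MyShm");
    if (h == NULL) return;
    char* p = (char*)MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (p != NULL) {
        strcpy(p, "hello");       // real code would hold a lock here
        UnmapViewOfFile(p);
    }
    CloseHandle(h);
}

// Process 2: open the same section by name and read from it.
void consumer() {
    HANDLE h = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, "MyShm");
    if (h == NULL) return;
    char* p = (char*)MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (p != NULL) {
        // ... read from p ...
        UnmapViewOfFile(p);
    }
    CloseHandle(h);
}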
You CAN get shared memory to use the same address in two processes on Windows. It's achievable with several techniques.
Using MapViewOfFileEx; here's the significant excerpt from MSDN:
If a suggested mapping address is supplied, the file is mapped at the specified address (rounded down to the nearest 64K boundary) if there is enough address space at the specified address. If there is not enough address space, the function fails.
Typically, the suggested address is used to specify that a file should be mapped at the same address in multiple processes. This requires the region of address space to be available in all involved processes. No other memory allocation can take place in the region that is used for mapping, including the use of the VirtualAlloc or VirtualAllocEx function to reserve memory.
If the lpBaseAddress parameter specifies a base offset, the function succeeds if the specified memory region is not already in use by the calling process. The system does not ensure that the same memory region is available for the memory mapped file in other 32-bit processes.
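A minimal sketch of using a suggested base address (the address, name and size are illustrative; a real application would need a fallback for when the mapping fails):
#include <windows.h>

int main() {
    // Both processes ask for the same 64K-aligned base address; if that
    // range is not free in either process, the call simply fails.
    void* const kBase = (void*)0x20000000;   // illustrative address

    HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                  PAGE_READWRITE, 0, 4096, "MyShm");
    if (h == NULL) return 1;

    void* p = MapViewOfFileEx(h, FILE_MAP_ALL_ACCESS, 0, 0, 4096, kBase);
    if (p == NULL) {
        CloseHandle(h);   // address range unavailable: fall back or fail
        return 1;
    }

    // Here p == kBase, so raw pointers into the region are meaningful
    // in every process that mapped at the same base address.
    UnmapViewOfFile(p);
    CloseHandle(h);
    return 0;
}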
Another related technique is to use a DLL with a section marked read + write + shared. In this case, the OS will pretty much do the MapViewOfFileEx call for you and for any other process that loads the DLL.
You may have to mark your DLL for a fixed load address (not relocatable), etc., naturally.
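With MSVC, a hedged sketch of such a shared section looks like this (the section name ".shared" is arbitrary; variables must be initialized to land in the section):
// In the DLL's source (MSVC-specific pragmas):
#pragma data_seg(".shared")
volatile int g_sharedCounter = 0;   // must be initialized to go in .shared
#pragma data_seg()

// Tell the linker to make the section readable, writable and shared:
#pragma comment(linker, "/SECTION:.shared,RWS")
Every process that loads the DLL then sees the same g_sharedCounter.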
You can use Marshalling of pointers.
If it's possible, it would be best to have the image data loaded or generated directly into the shared memory area. This eliminates the memory copy and puts the data directly where it needs to be. When it's ready, you can signal the other process and give it the offset into your shared memory where the data begins.
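For illustration, a hedged sketch of that approach (the header layout and the named event are my own assumptions, not from the original answer):
#include <windows.h>

// Hypothetical header placed at the start of the shared view.
struct ShmHeader {
    DWORD imageOffset;   // where the image bytes begin within the view
    DWORD imageSize;
};

// Producer side: generate the image directly into the shared view,
// record where it starts, and signal the other process.
void publish_image(char* view, HANDLE hReadyEvent) {
    ShmHeader* hdr = (ShmHeader*)view;
    hdr->imageOffset = sizeof(ShmHeader);
    hdr->imageSize   = 1 << 20;              // 1 MB, illustrative
    char* image = view + hdr->imageOffset;
    // ... render/decode directly into `image`; no CopyMemory needed ...
    (void)image;
    SetEvent(hReadyEvent);                   // event created/shared elsewhere
}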