I'm using Boost Shared Memory to share a vector across processes.
In the client, once I open the shared memory and read a vector from it, how can I tell whether the memory is invalid or isn't what I'm looking for?
Will open_only fail if the memory segment does not exist, and if so, how do I catch that failure?
Also, the shared memory segment is supposed to be removed when there are no more references to it. However, in my case, even when both the client and the server are shut down and nothing else is accessing the shared memory, the segment remains in the Boost Interprocess folder under ProgramData, with some data in it. So the next time the client starts up, it has no problem opening the segment and thinks it is accessing valid data, when in fact there is no data to be shared.
Kindly advise. Thank you.
Speaking from experience with the underlying shm API, and not as a Boost expert...
To determine validity, one technique is to figure out whether the current process is the one creating the shared memory for the first time. You can do this by checking the size right after opening (with fstat) and seeing whether it is zero: a freshly created object has size zero, so whoever sees zero is the creator and should initialize the memory. Also, when you then call ftruncate() to set the size, that size is set for all other processes.
To ensure removal, you can call shm_unlink() to remove the shared memory file from the system. I believe Boost exposes this through a remove() API that does the same thing.
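A minimal POSIX sketch of both techniques, assuming the name "/my_shm" and a 4096-byte size (both made up for illustration), and ignoring the race between the size check and initialization, which a real version would have to serialize:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
    // Open the object, creating it if it does not exist yet.
    int fd = shm_open("/my_shm", O_CREAT | O_RDWR, 0660);
    if (fd < 0) return 1;

    // A freshly created object has size 0, so the process that sees
    // size 0 is the creator and is responsible for initialization.
    struct stat st;
    fstat(fd, &st);
    if (st.st_size == 0) {
        ftruncate(fd, 4096);  // the size becomes visible to all processes
        // ... mmap() and write the initial contents here ...
    }
    close(fd);

    // When the segment should go away for good:
    shm_unlink("/my_shm");  // Boost equivalent: shared_memory_object::remove()
}

Note that shm_unlink() only removes the name; processes that still have the segment mapped keep using it until they unmap.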
I'm using boost::interprocess to share data between processes via managed shared memory. Clients may read or write to the shared memory.
Clients first attempt to open an existing shared memory:
managed_shared_memory(open_only, "MySharedMemory");
If the open fails then the memory is created:
managed_shared_memory(create_only, "MySharedMemory");
Once the shared memory has been opened or created the client increments the client count (integer stored in the shared memory).
When a client's destructor is called the client count is decremented. If client count == 0 then the shared memory is removed:
shared_memory_object::remove("MySharedMemory");
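A minimal sketch of this scheme (the segment name comes from above; the 64 KiB size and the counter name "ClientCount" are assumptions, and the unsynchronized counter is exactly the weakness discussed next):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <memory>

using namespace boost::interprocess;

int main() {
    std::unique_ptr<managed_shared_memory> segment;
    try {
        // Try to attach to an existing segment first.
        segment.reset(new managed_shared_memory(open_only, "MySharedMemory"));
    } catch (const interprocess_exception&) {
        // Doesn't exist yet: create it (65536 is an assumed size).
        segment.reset(new managed_shared_memory(create_only, "MySharedMemory", 65536));
    }

    // find_or_construct returns the existing counter, or a new one set to 0.
    int* clients = segment->find_or_construct<int>("ClientCount")(0);
    ++*clients;  // a real version must protect this with a mutex

    // ... use the shared memory ...

    if (--*clients == 0)
        shared_memory_object::remove("MySharedMemory");
}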
So far so good. However, if a process crashes then it can't decrement the client count so the memory isn't properly removed.
At this point a new client may successfully open the shared memory in whatever state it was left in instead of a fresh default state. This is problematic.
So my question is. What is the best way to manage the lifetime of shared memory?
Not crashing is a good idea but I'm working in a plugin environment where something beyond my control could take everything down and clients come and go continuously.
Another idea is to use pipes or sockets to verify that clients are still valid (e.g. ping them when the memory is opened and cleanup manually if there is no response) but this feels like overkill.
The boost::interprocess::managed_shared_memory manual and most other resources I checked always show examples where there is a parent process and a bunch of children spawned by it.
In my case, I have several processes spawned by a third-party application and I can only control the "children". That means I cannot have a central brain to allocate and deallocate the shared memory segment; all my processes must be able to do so (and therefore I can't simply erase the data on exit).
My idea was to open_or_create a segment and, using a lock stored (find_or_construct'ed) in this area, check a certain hash to see whether the memory area was created by this same software version.
If this is not true, the memory segment must be wiped to avoid breaking code.
Ideally, I would want to keep the lock object because there could be other processes already waiting on it.
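A sketch of that idea, assuming a header object that holds the lock plus a version hash (the names and the constant kVersionHash are made up, and the wipe step is left out, since that is exactly the open question):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

using namespace boost::interprocess;

struct Header {
    interprocess_mutex mutex;
    unsigned version = 0;   // 0 = not yet initialized
};

const unsigned kVersionHash = 0x1234abcd;  // assumed per-build constant

int main() {
    managed_shared_memory segment(open_or_create, "MySegment", 65536);

    // find_or_construct is atomic, so every process gets the same header.
    Header* hdr = segment.find_or_construct<Header>("Header")();

    scoped_lock<interprocess_mutex> lock(hdr->mutex);
    if (hdr->version != kVersionHash) {
        // ... wipe/rebuild everything except the header itself ...
        hdr->version = kVersionHash;
    }
}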
Things I thought:
List all object names and delete all but the lock.
This cannot be done, because the objects might be using different implementations.
Also, I couldn't find how to enumerate the names.
Use shared_memory_object::truncate.
I could not find much about it.
With a managed_shared_memory, I don't know how reliable it would be, because I'm not sure the lock was the first data allocated.
Refcount the processes and wipe the data when the last one exits.
Prone to fatal-termination problems.
Use a separate shared memory area just for this bookkeeping.
Sounds reasonable, but overkill?
Any suggestions or insights?
This sounds like a "shared ownership" scenario.
What you'd usually think of in such a scenario, would be shared pointers:
http://www.boost.org/doc/libs/1_58_0/doc/html/interprocess/interprocess_smart_ptr.html#interprocess.interprocess_smart_ptr.shared_ptr
Interprocess has specialized shared pointers (and ditto make_shared) for exactly this purpose.
Creating the shared memory realm can be done "optimistically" from each participating process (open_or_create). Note that creation needs to be synchronized. Further segment manager operations are usually already implicitly synchronized:
Whenever the same managed shared memory is accessed from different processes, operations such as creating, finding, and destroying objects are automatically synchronized. If two programs try to create objects with different names in the managed shared memory, the access is serialized accordingly. To execute multiple operations at one time without being interrupted by operations from a different process, use the member function atomic_func() (see Example 33.11).
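A sketch of how that could look, based on the managed_shared_ptr helper from the Boost.Interprocess docs (the segment and object names are made up; atomic_func serializes the check-and-init against other processes, as the quote above describes):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/smart_ptr/shared_ptr.hpp>

using namespace boost::interprocess;

struct MyData { int value; };

// A shared_ptr whose reference count itself lives in the segment.
typedef managed_shared_ptr<MyData, managed_shared_memory>::type data_ptr;

int main() {
    // Optimistic creation from every participating process.
    managed_shared_memory segment(open_or_create, "MySegment", 65536);

    data_ptr* owner = 0;
    auto init = [&] {
        // The shared_ptr object is itself a named object in the segment.
        owner = segment.find_or_construct<data_ptr>("DataOwner")();
        if (!*owner)
            *owner = make_managed_shared_ptr(
                segment.construct<MyData>(anonymous_instance)(), segment);
    };
    segment.atomic_func(init);

    (*owner)->value = 42;
    // When the last copy of the shared_ptr (in any process) goes away,
    // the MyData instance is destroyed automatically.
}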
I'm using Boost shared memory to share vectors across different processes. However, on some occasions, the consumer of the shared memory throws up this exception:
Unexpected exception: The volume for a file has been externally altered so that the opened file is no longer valid.
I have the proper synchronization mechanisms in place. What could this error indicate?
SOLVED: the size of the memory hadn't been properly allocated upon creation by one of the processes.
When a shared memory object is created, its size is 0. To set the size of the shared memory, the user must use the truncate function call, in a shared memory that has been opened with read-write attributes
Source - Boost shared memory
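In other words, the creating side has to call truncate() before anyone maps the memory. A minimal sketch with the raw shared_memory_object API (the name and size are placeholders):

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>

using namespace boost::interprocess;

int main() {
    // A newly created shared memory object has size 0.
    shared_memory_object shm(create_only, "MySharedMemory", read_write);
    shm.truncate(1024);  // without this, consumers get an invalid mapping

    mapped_region region(shm, read_write);  // maps the whole object
    // ... write through region.get_address(), up to region.get_size() bytes ...

    shared_memory_object::remove("MySharedMemory");
}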
It means the volume for a file has been externally altered. Look for other processes writing to the file.
In other words, it means you do not have proper synchronization in place.
Do you use bip::managed_mapped_file::grow by any chance? The documentation states it only allows offline growing.
I want to create a shared memory segment for IPC between processes, but the variables I want to put in that shared segment change dynamically and keep growing. The examples I saw create the segment with a fixed size, and I looked in the QSharedMemory class reference and found no function for resizing.
What can I do, without resorting to creating new shared segments? I want one segment with one key that stays available at run time to the other processes.
You cannot; both applications should agree on a size and create the memory at the beginning.
If you really want to resize it, you have to close the previous memory and create a new one again.
In this case both applications must know what's going on.
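A minimal sketch of that agreement, with an assumed key and size; whichever process runs first creates the segment, and the other attaches:

#include <QSharedMemory>
#include <cstring>

int main() {
    const int kSize = 4096;          // fixed size both sides agree on
    QSharedMemory shm("MyKey");      // "MyKey" is an assumed key

    if (!shm.create(kSize)) {        // fails if it already exists...
        if (!shm.attach())           // ...so attach to the existing one
            return 1;
    }

    shm.lock();                      // built-in lock against corruption
    std::memcpy(shm.data(), "hello", 6);
    shm.unlock();
}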
I currently try to implement some interprocess communication using the Windows CreateFileMapping mechanism. I know that I need to create a file mapping object with CreateFileMapping first and then create a pointer to the actual data with MapViewOfFile. The example then puts data into the mapfile by using CopyMemory.
In my application I have an image buffer (1 MB large) which I want to send to another process. So now I obtain a pointer to the image and then copy the whole image buffer into the mapped file. But I wonder whether this is really necessary. Isn't it possible to just copy the actual pointer into the shared memory, pointing at the image buffer data? I tried a bit but didn't succeed.
Different processes have different address spaces. If you pass a valid pointer in one process to another process, it will probably point to random data in the second process. So you will have to copy all the data.
I strongly recommend you use Boost.Interprocess. It has lots of goodies to manage this kind of stuff and even includes some special Windows-only functions in case you need to interoperate with other processes that use particular Win32 features.
The most important thing is to use offset pointers rather than regular pointers. Offset pointers are basically relative pointers (they store the difference between where the pointer is and where the thing pointed to is). This means that even if the two pointers are mapped to different address spaces, as long as the mappings are identical in structure then you are fine.
I've used all kinds of complicated data structures with offset smart pointers and it worked like a charm.
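A sketch of an offset pointer in action, using Boost's offset_ptr inside a managed segment (the names and size are made up):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/offset_ptr.hpp>

using namespace boost::interprocess;

struct Node {
    int value;
    offset_ptr<Node> next;  // stores a self-relative offset, not an address
};

int main() {
    managed_shared_memory segment(open_or_create, "MySegment", 65536);

    Node* head = segment.construct<Node>("head")();
    Node* tail = segment.construct<Node>(anonymous_instance)();
    head->next = tail;  // stays valid in every process, wherever the
                        // segment happens to be mapped
}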
Shared memory doesn't mean sending and receiving data. It is a region of memory created for a number of processes to use without conflict. For that you have to follow some mechanism, such as locks, so that the data does not get corrupted.
In process 1 :
CreateFileMapping() : It will create the shared memory block with the name provided in the last parameter, if it is not already present, and returns a handle to it if successful.
MapViewOfFile() : It maps (includes) this shared block into the process's address space and returns a pointer to the view.
Only through this pointer returned by MapViewOfFile() can you access the shared block.
In process 2 :
OpenFileMapping() : If the shared memory block was successfully created by CreateFileMapping(), you can open it using the same name (the name used to create the shared memory block).
UnmapViewOfFile() : It will unmap the shared memory block from the process's address space. Call this function when you are done using the shared memory (i.e. access, modification etc.).
CloseHandle() : Finally, to detach the shared memory block from the process, call this with the handle returned by OpenFileMapping() or CreateFileMapping().
Though these functions look simple, the behaviour is tricky if the flags are not selected properly.
If you wish to read and write the shared memory, specify PAGE_READWRITE in CreateFileMapping() (PAGE_EXECUTE_READWRITE is only needed if the memory must also be executable).
Whenever you wish to access shared memory after creating it successfully, use FILE_MAP_ALL_ACCESS in MapViewOfFile().
It is better to specify FALSE (do not inherit the handle from the parent process) in OpenFileMapping(), as this avoids confusion.
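Putting the steps above together into a minimal sketch ("MyMap" and the 1 MiB size are placeholders):

#include <windows.h>
#include <cstring>

int main() {
    // Process 1: create a pagefile-backed mapping (no file on disk).
    HANDLE hMap = CreateFileMappingA(
        INVALID_HANDLE_VALUE, NULL,
        PAGE_READWRITE,        // read/write pages
        0, 1024 * 1024,        // high and low 32 bits of the size
        "MyMap");
    if (hMap == NULL) return 1;

    void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (view == NULL) return 1;
    std::memcpy(view, "hello", 6);

    // Process 2 would instead do:
    //   HANDLE h = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, "MyMap");
    //   void* v = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0);

    UnmapViewOfFile(view);
    CloseHandle(hMap);
}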
You CAN get shared memory to use the same address across two processes on Windows. It's achievable with several techniques.
Using MapViewOfFileEx; here's the significant excerpt from MSDN.
If a suggested mapping address is supplied, the file is mapped at the specified address (rounded down to the nearest 64K boundary) if there is enough address space at the specified address. If there is not enough address space, the function fails. Typically, the suggested address is used to specify that a file should be mapped at the same address in multiple processes. This requires the region of address space to be available in all involved processes. No other memory allocation can take place in the region that is used for mapping, including the use of the VirtualAlloc or VirtualAllocEx function to reserve memory.
If the lpBaseAddress parameter specifies a base offset, the function succeeds if the specified memory region is not already in use by the calling process. The system does not ensure that the same memory region is available for the memory mapped file in other 32-bit processes.
Another related technique is to use a DLL with a section marked Read + Write + Shared. In this case, the OS will pretty much do the MapViewOfFileEx call for you and for any other process which loads the DLL.
You may have to mark your DLL with a FIXED load address (not relocatable), naturally.
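A sketch of the MapViewOfFileEx route; the base address 0x20000000 is just an assumption that has to be free (and 64 KiB aligned) in every participating process:

#include <windows.h>

int main() {
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, 1 << 20, "MyMap");
    if (hMap == NULL) return 1;

    // Ask for the same base address in every process.
    void* base = MapViewOfFileEx(hMap, FILE_MAP_ALL_ACCESS,
                                 0, 0, 0, (LPVOID)0x20000000);
    if (base != NULL) {
        // Raw pointers into [base, base + size) are now valid in every
        // process that mapped at the same address.
        UnmapViewOfFile(base);
    } else {
        // Region not available here: fall back to a system-chosen
        // address and use offsets instead of raw pointers.
    }
    CloseHandle(hMap);
}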
You can use Marshalling of pointers.
If it's possible, it would be best to have the image data loaded/generated directly into the shared memory area. This eliminates the memory copy and puts it directly where it needs to be. When it's ready you can signal the other process, giving it the offset into your shared memory where the data begins.
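A sketch of that approach with Boost.Interprocess, where get_handle_from_address converts a pointer into a segment-relative handle that the other process can convert back (the segment name and sizes are placeholders):

#include <boost/interprocess/managed_shared_memory.hpp>

using namespace boost::interprocess;

int main() {
    managed_shared_memory segment(open_or_create, "ImageSegment", 2 * 1024 * 1024);

    // Allocate the 1 MB image buffer directly in shared memory and
    // generate/decode the image straight into it: no extra copy.
    void* img = segment.allocate(1024 * 1024);
    // ... fill img ...

    // Raw pointers don't travel across processes, but handles do.
    managed_shared_memory::handle_t h = segment.get_handle_from_address(img);
    // Send h to the consumer (pipe, message, ...); it recovers the buffer:
    //   void* p = segment.get_address_from_handle(h);
    (void)h;
}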