When would one use unnamed shared memory? - C++

When would you choose to use unnamed shared memory in Windows?
It seems to me that message passing between threads is not very useful: one can instead pass a pointer to a struct/variable to the worker threads and use that as shared memory, rather than calling CreateFileMapping.

One reason to use unnamed shared memory is to restrict access to the file mapping to only those processes that are given a handle to it by the creating process. This avoids two problems:
any process that knows the name and has sufficient access to create a mapped file can squat on your named object, preventing or interfering with its legitimate use - this allows a denial of service attack.
accidental rather than malicious name clashes.
When you don't use a name, you can be sure that only processes that you want to have access, get it. From the MSDN docs for CreateFileMapping:
A single file mapping object can be shared by multiple processes through inheriting the handle at process creation, duplicating the handle, or opening the file mapping object by name.

Section objects (aka "file mapping objects") are not just used to share memory between processes. The most obvious use of section objects is to map in a file to do I/O, and giving the objects names wouldn't be very useful in most cases. For unnamed pagefile-backed sections ("shared memory") you can still make child processes inherit the handle so they can use the sections.
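For instance, a sketch of creating such an unnamed, pagefile-backed section with an inheritable handle (a minimal illustration, not taken from the question; error handling omitted):

#include <windows.h>

// Sketch: create an unnamed, pagefile-backed section whose handle can be
// inherited by a child process started with bInheritHandles = TRUE.
HANDLE CreateInheritableSection(SIZE_T size)
{
    SECURITY_ATTRIBUTES sa = {};
    sa.nLength = sizeof(sa);
    sa.bInheritHandle = TRUE;          // children may inherit this handle

    // INVALID_HANDLE_VALUE => backed by the pagefile; null name => unnamed.
    return CreateFileMappingW(INVALID_HANDLE_VALUE, &sa, PAGE_READWRITE,
                              0, static_cast<DWORD>(size), nullptr);
}

The parent would then start the child with CreateProcess(..., bInheritHandles = TRUE, ...) and tell it the numeric handle value, for example on the command line.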

You can pass handles to unnamed objects across process boundaries. That is to say, you can create an unnamed memory map in your application and access it from another process without using a name!
Look at the DuplicateHandle call, which is what lets you hand such a handle to another process.
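A minimal sketch of that approach, assuming the creating process already knows the target's process ID (targetPid is a placeholder) and has the right to open it:

#include <windows.h>

// Sketch: duplicate an unnamed mapping handle into another process.
// hMapping comes from CreateFileMapping; targetPid identifies the receiver.
HANDLE ShareMappingWith(HANDLE hMapping, DWORD targetPid)
{
    HANDLE hTargetProcess = OpenProcess(PROCESS_DUP_HANDLE, FALSE, targetPid);
    if (!hTargetProcess)
        return nullptr;

    HANDLE hInTarget = nullptr;    // this value is only meaningful inside the target
    BOOL ok = DuplicateHandle(GetCurrentProcess(), hMapping,
                              hTargetProcess, &hInTarget,
                              0, FALSE, DUPLICATE_SAME_ACCESS);
    CloseHandle(hTargetProcess);
    return ok ? hInTarget : nullptr;
}

The numeric value of the duplicated handle still has to be sent to the target over some channel (a pipe, a message); the target can then call MapViewOfFile on it directly.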

Related

How to share a global object between threads?

I have created a class, which has many public functions, some which write data and some that only read data.
It's required that I do this with 3 threads; I have no other option.
I know that if I access a shared resource just to read it, then I don't have to protect it, but I don't know if it is any different when I am using a function to read a private variable of the shared resource.
E.g. I am trying to do...
globalObject.readColour();
which is a function that reads the colour of the global object.
Does it mean that I have to lock at this point, or is it okay to just read the value without any risk?
I'm working on mbed, which supports C and C++98.
This question is similar to this one
If all your threads only ever read the variable then you don't need a mutex (or similar), but if any thread performs a writing operation you should use a mutex for every access, reads included.
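For illustration, a minimal sketch of guarding both sides, assuming the mbed RTOS Mutex class is available; the class and member names here are hypothetical stand-ins for your own:

#include "mbed.h"   // assumption: mbed OS with the RTOS, which provides Mutex

// Every access to colour, read or write, goes through the same mutex,
// so a reader can never observe a half-written value.
class GlobalObject {
public:
    GlobalObject() : colour(0) {}

    int readColour() {
        mutex.lock();
        int c = colour;        // keep the critical section as small as possible
        mutex.unlock();
        return c;
    }

    void writeColour(int c) {
        mutex.lock();
        colour = c;
        mutex.unlock();
    }

private:
    Mutex mutex;
    int colour;
};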

How to wipe some contents of boost managed_shared_memory?

The boost::interprocess::managed_shared_memory manual and most other resources I checked always shows examples where there is a parent process and a bunch of children spawned by it.
In my case, I have several processes spawned by a third-party application and I can only control the "children". That means I cannot have a central brain to allocate and deallocate the shared memory segment; all my processes must be able to do so (therefore, I can't simply erase the data on exit).
My idea was to open_or_create a segment and, using a lock stored (find_or_construct'ed) in this area, I check a certain hash to see if the memory area was created by this same software version.
If this is not true, the memory segment must be wiped to avoid breaking code.
Ideally, I would want to keep the lock object because there could be other processes already waiting on it.
Things I thought of:
List all object names and delete all but the lock.
This cannot be done because the objects might be using different implementations.
Also, I couldn't find where to list the names.
Use shared_memory_object::truncate
I could not find much about it
By using a managed_shared_memory, I don't know how reliable it would be because I'm not sure the lock was the first allocated data.
Refcount the processes and wipe the data with the last one.
Prone to fatal termination problems.
Use a separate shared memory area just for this bookkeeping.
Sounds reasonable, but overkill?
Any suggestions or insights?
This sounds like a "shared ownership" scenario.
What you'd usually think of in such a scenario would be shared pointers:
http://www.boost.org/doc/libs/1_58_0/doc/html/interprocess/interprocess_smart_ptr.html#interprocess.interprocess_smart_ptr.shared_ptr
Interprocess has specialized shared pointers (and ditto make_shared) for exactly this purpose.
Creating the shared memory realm can be done "optimistically" from each participating process (open_or_create). Note that creation needs to be synchronized. Further segment manager operations are usually already implicitly synchronized:
Whenever the same managed shared memory is accessed from different processes, operations such as creating, finding, and destroying objects are automatically synchronized. If two programs try to create objects with different names in the managed shared memory, the access is serialized accordingly. To execute multiple operations at one time without being interrupted by operations from a different process, use the member function atomic_func() (see Example 33.11).
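As an illustration of that "optimistic" pattern, here is a rough sketch; the segment name, the Header type and the version hash are placeholders, and error handling is omitted:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

namespace bip = boost::interprocess;

struct Header {                        // hypothetical bookkeeping object
    bip::interprocess_mutex mutex;     // kept alive across wipes of the payload
    unsigned version;
    explicit Header(unsigned v) : version(v) {}
};

void attach(unsigned versionHash)      // hash identifying this software version
{
    // open_or_create from every process; the segment manager serializes
    // create/find/destroy operations for us.
    bip::managed_shared_memory seg(bip::open_or_create, "MySegment", 65536);

    Header* hdr = seg.find_or_construct<Header>("Header")(versionHash);

    bip::scoped_lock<bip::interprocess_mutex> lock(hdr->mutex);
    if (hdr->version != versionHash) {
        // An older version built this segment: drop the payload objects by
        // name but keep the Header (and its mutex), then stamp the new version.
        seg.destroy<int>("Payload");   // hypothetical payload object
        hdr->version = versionHash;
    }
}

Because the mutex lives in the header that is never destroyed, processes already blocked on it keep a valid object while the payload is wiped and rebuilt.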

C++: DLL with memory mapped file

I have a DLL that might get called by multiple applications at the same time.
This DLL memory-maps a file.
I have 2 questions:
1) Each application will create its own instance of the DLL, right?
And thus, the file will be memory-mapped multiple times.
2) If this is true, I don't understand what is happening here:
a) Application A calls the DLL.
b) Application B calls the DLL.
c) I quit application A, and the DLL will unmap the file.
d) Application B calls the DLL, and the memory-mapped file is not available anymore, and the call fails.
I don't understand this. Can anybody explain it?
Thank you.
This happens because your assumption from 1) is false. A DLL is by definition shared; both applications are using the same DLL instance, so when you release the file in one application, it won't be available to the others.
To get around your issue, you should implement some reference counting mechanism in order to unmap the file only when no process is using it.
Edit: #sumeet is right. Each process has its own address space; when two processes load the same DLL, they might share its read-only data for increased efficiency, but their writable data is local to each process. Nevertheless, a memory-mapped file is a kernel object, like semaphores, pipes and shared memory, so its lifetime is governed by the kernel's reference count rather than by any single process.
Edit2: From MSDN (Remarks section):
Multiple processes can share a view of the same file by either using a single shared file mapping object or creating separate file mapping objects backed by the same file. A single file mapping object can be shared by multiple processes through inheriting the handle at process creation, duplicating the handle, or opening the file mapping object by name. For more information, see the CreateProcess, DuplicateHandle and OpenFileMapping functions.
[...]
Mapped views of a file mapping object maintain internal references to the object, and a file mapping object does not close until all references to it are released. Therefore, to fully close a file mapping object, an application must unmap all mapped views of the file mapping object by calling UnmapViewOfFile and close the file mapping object handle by calling CloseHandle. These functions can be called in any order.
First of all, from the first paragraph, how is each app initializing the view?
From the second paragraph, I gather that calling UnmapViewOfFile and CloseHandle from each app releases that app's references to the file mapping, and Windows automatically releases the associated resources once the last reference is gone (i.e. it keeps the reference count, you don't need to do it).
Post your memory mapping initialization and shutdown code for both apps.
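For reference, a minimal per-process sketch of the lifecycle those Remarks describe, assuming the DLL opens a named, pagefile-backed mapping on attach and releases it on detach (the name and size are placeholders):

#include <windows.h>

static HANDLE g_hMap = nullptr;   // one handle and one view per process loading the DLL
static void*  g_view = nullptr;

bool AttachSharedBlock()
{
    // Each process creates or opens the same named mapping; the kernel keeps
    // the reference count, so the block lives until the last user lets go.
    g_hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
                                0, 1 << 20, L"Local\\ImgShare");
    if (!g_hMap)
        return false;
    g_view = MapViewOfFile(g_hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    return g_view != nullptr;
}

void DetachSharedBlock()
{
    if (g_view) UnmapViewOfFile(g_view);   // releases this process's view only
    if (g_hMap) CloseHandle(g_hMap);       // object is destroyed with the last reference
    g_view = nullptr;
    g_hMap = nullptr;
}

With this shape, quitting application A only drops A's view and handle; B's mapping stays valid until B calls DetachSharedBlock itself.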

Shared memory API, where a process can attach shared memory to other process

Can anyone look into this and suggest an API?
We have APIs with which a process can create and/or attach a shared memory segment to its own address space, but I can't find an API that attaches a shared memory segment to one process from another process (e.g., process A calling something like shmat() to attach the shared memory to process B).
Shared memory doesn't belong to any particular process (unless you create it with a private IPC_PRIVATE key). It belongs to the system.
So, when you use shmget with a non-private key (and the IPC_CREAT flag), you will either create a shared memory block or attach to an existing one.
You need a way for both processes to use the same IPC key and this is often done by using ftok which uses a file specification and an identifier to give you an IPC key for use in the shmget call (and other IPC type calls, such as msgget or semget).
For example, in the programs pax1 and pax2, you may have a code segment like:
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int getMyShMem (void) {
    key_t mykey = ftok ("/var/pax.cfg", 0);        // only one shm block so use id of 0
    if (mykey == (key_t)-1)                        // no go.
        return -1;
    return shmget (mykey, 1024, IPC_CREAT | 0600); // get (or make) a 1K block, rw for owner.
}
By having both processes use the same file specification and ID, they'll get the same shared memory block.
You can use different IDs to give you distinct shared memory blocks all based on the same file (you may, for example, want one for a configuration shared memory block and another for storing shared state).
And, given that it's your configuration file the IPC key is based on, the chances of other programs using it are minuscule (I think they may be zero but I'm not 100% sure).
You can't forcefully inject shared memory into a process from outside that process (well, you may be able to, but it would be both dangerous and require all sorts of root-level permissions). That would break the protected-process model and turn your system into something about as secure as MS-DOS :-)
Let's see, allow one process to force a shared memory segment onto another? What is the receiver going to do with it? How will it know it has now mapped this block in, and what is expected of it?
You're thinking about the problem the wrong way: simply hoisting a block of memory onto a second process is not going to allow you to do what you want. You need to notify the second process that it has now mapped this block so it can start doing stuff with it. I suggest you take a step back and really look at your design and what you are doing. My recommended approach would be:
A connects to B via some other IPC (say socket)
A informs B that it should attach with the details (name etc.)
B then attaches - and now B is aware of it and can start doing stuff with it. (say for example once the attach completes, B confirms to A, and then they can start talking over the shared memory block).
As for wrapping shared memory in a nice library - consider boost::interprocess.
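To illustrate step 3 of that approach, here is a sketch of what B might do once A has told it, over the socket, which key and size to use (both values come from A; nothing is injected by force):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

// B attaches to the segment A announced; "key" arrived over the other IPC channel.
void* attachSegment(key_t key, size_t size)
{
    int shmid = shmget(key, size, 0600);    // segment must already exist
    if (shmid == -1)
        return 0;
    void* addr = shmat(shmid, 0, 0);        // map it into B's own address space
    return (addr == (void*)-1) ? 0 : addr;
}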
You are asking to attach to the process memory of another process, right?
Just open(2) the file /proc/<pid>/mem and use it. Check /proc/<pid>/maps for the list of usable addresses in that file.

Using shared memory under Windows. How to pass different data

I am currently trying to implement some interprocess communication using the Windows CreateFileMapping mechanism. I know that I need to create a file mapping object with CreateFileMapping first and then create a pointer to the actual data with MapViewOfFile. The example then puts data into the mapped file by using CopyMemory.
In my application I have an image buffer (1 MB large) which I want to send to another process. So right now I acquire a pointer to the image and then copy the whole image buffer into the mapped file. But I wonder if this is really necessary. Isn't it possible to just place the actual pointer, which points to the image buffer data, in the shared memory? I tried a bit but didn't succeed.
Different processes have different address spaces. If you pass a valid pointer in one process to another process, it will probably point to random data in the second process. So you will have to copy all the data.
I strongly recommend you use Boost::interprocess. It has lots of goodies to manage this kind of stuff and even includes some special Windows-only functions in case you need to interoperate with other processes that use particular Win32 features.
The most important thing is to use offset pointers rather than regular pointers. Offset pointers are basically relative pointers (they store the difference between where the pointer is and where the thing pointed to is). This means that even if the two pointers are mapped to different address spaces, as long as the mappings are identical in structure then you are fine.
I've used all kinds of complicated data structures with offset smart pointers and it worked like a charm.
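A small sketch of the offset-pointer idea, with an illustrative segment name and struct that are not taken from the question:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/offset_ptr.hpp>
#include <cstddef>

namespace bip = boost::interprocess;

// The record stores an offset_ptr rather than a raw pointer, so the link stays
// valid even if the two processes map the segment at different base addresses.
struct ImageRef {
    bip::offset_ptr<unsigned char> pixels;   // points into the same segment
    std::size_t size;
};

void producer()
{
    bip::managed_shared_memory seg(bip::open_or_create, "ImgSegment", 2 * 1024 * 1024);
    unsigned char* buf = seg.construct<unsigned char>("PixelBuffer")[1024 * 1024]();
    ImageRef* ref = seg.find_or_construct<ImageRef>("ImageRef")();
    ref->pixels = buf;                       // stored internally as a relative offset
    ref->size   = 1024 * 1024;
}

The consuming process can look up "ImageRef" by name and follow ref->pixels regardless of where its own mapping happens to land.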
Shared memory doesn't mean sending and receiving data. It is memory that a number of processes can access without stepping on each other, and for that you have to follow some mechanism, such as locks, so that the data does not get corrupted.
In process 1:
CreateFileMapping(): it creates the shared memory block with the name provided in the last parameter, if it is not already present, and returns a handle to it if successful.
MapViewOfFile(): it maps (includes) this shared block into the process address space and returns a pointer to the view.
Only through this pointer returned by MapViewOfFile() can you access that shared block.
In process 2:
OpenFileMapping(): if the shared memory block was successfully created by CreateFileMapping(), you can open it using the same name (the name used to create it).
UnmapViewOfFile(): it unmaps the shared memory block from that process's address space. Call it when you are done using the shared memory (i.e. access, modification etc.).
CloseHandle(): finally, to detach from the shared memory block, call this with the handle returned by OpenFileMapping() or CreateFileMapping().
Though these functions look simple, the behaviour is tricky if the flags are not selected properly.
If you wish to read or write shared memory, specify PAGE_READWRITE in CreateFileMapping() (PAGE_EXECUTE_READWRITE is only needed if you also want to execute code from the block).
Whenever you wish to access shared memory after creating it successfully, use FILE_MAP_ALL_ACCESS in MapViewOfFile().
It is better to specify FALSE (do not inherit handle from parent process) in OpenFileMapping() as it will avoid confusion.
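Putting those calls together, a hedged end-to-end sketch (the mapping name and sizes are placeholders, error checks are trimmed, and PAGE_READWRITE is used because plain data needs no execute permission):

#include <windows.h>
#include <cstring>

const wchar_t* kName = L"Local\\MyBlock";   // placeholder name shared by both sides
const DWORD    kSize = 1 << 20;             // 1 MB

// Process 1: create the block and copy the image into it.
void producer(const void* image, size_t len)
{
    HANDLE h = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                  PAGE_READWRITE, 0, kSize, kName);
    void* view = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    memcpy(view, image, len);               // CopyMemory(view, image, len) is equivalent
    UnmapViewOfFile(view);
    CloseHandle(h);                         // if no one else holds a reference, the block is gone
}

// Process 2: open the same block by name and read it.
void consumer(void* out, size_t len)
{
    HANDLE h = OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE, kName);
    void* view = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    memcpy(out, view, len);
    UnmapViewOfFile(view);
    CloseHandle(h);
}

In practice at least one process must keep its handle or view open until the other side has opened the mapping, and some signal (an event, a mutex) is needed so the reader knows the data is actually ready.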
You CAN get shared memory to use the same address in two processes on Windows. It's achievable with several techniques.
Using MapViewOfFileEx; here's the significant excerpt from MSDN:
If a suggested mapping address is supplied, the file is mapped at the specified address (rounded down to the nearest 64K-boundary) if there is enough address space at the specified address. If there is not enough address space, the function fails. Typically, the suggested address is used to specify that a file should be mapped at the same address in multiple processes. This requires the region of address space to be available in all involved processes. No other memory allocation can take place in the region that is used for mapping, including the use of the VirtualAlloc or VirtualAllocEx function to reserve memory.
If the lpBaseAddress parameter specifies a base offset, the function succeeds if the specified memory region is not already in use by the calling process. The system does not ensure that the same memory region is available for the memory mapped file in other 32-bit processes.
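A minimal sketch of passing that suggested address; the base address below is an arbitrary example and must be free in every participating process, otherwise the call fails:

#include <windows.h>

// Map the section at the same, pre-agreed base address in every process.
void* MapAtFixedAddress(HANDLE hMapping, SIZE_T size)
{
    void* const kDesiredBase = reinterpret_cast<void*>(0x40000000);  // example only
    return MapViewOfFileEx(hMapping, FILE_MAP_ALL_ACCESS,
                           0, 0, size, kDesiredBase);
}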
Another related technique is to use a DLL with a section marked Read + Write + Shared. In this case, the OS will pretty much do the MapViewOfFileEx call for you and for any other process which loads the DLL.
You may have to give your DLL a fixed, non-relocatable load address, naturally.
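With MSVC, such a shared data section is typically declared along these lines (a sketch; the section name .shared is a convention, not a requirement):

#include <windows.h>

// Variables placed in this section are shared by every process that loads the DLL.
#pragma data_seg(".shared")
volatile LONG g_sharedCounter = 0;   // must be initialized, or it lands in .bss instead
#pragma data_seg()
#pragma comment(linker, "/SECTION:.shared,RWS")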
You can use marshalling of pointers.
If it's possible, it would be best to have the image data loaded/generated directly into the shared memory area. This eliminates the memory copy and puts it directly where it needs to be. When it's ready you can signal the other process, giving it the offset into your shared memory where the data begins.
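A short sketch of that idea, assuming the view is already mapped in both processes and a named event is used for signalling; the renderer here is hypothetical:

#include <windows.h>
#include <cstddef>

void GenerateImageInto(unsigned char* dest);  // hypothetical: renders the image in place

// The image is produced straight into the shared view, so only an offset
// (not a pointer) has to be handed to the reading process.
void PublishImage(void* view, std::size_t offset, HANDLE hReadyEvent)
{
    unsigned char* dest = static_cast<unsigned char*>(view) + offset;
    GenerateImageInto(dest);
    SetEvent(hReadyEvent);                    // tell the consumer the data is ready
}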