What is the fastest possible interprocess communication (IPC) method on Windows 7? We would like to share only memory blocks (two-way).
Is it ReadProcessMemory or something else?
We would like to use plain C but, for example, what does Boost library use for IPC?
ReadProcessMemory shouldn't even be listed as an IPC method. Yes, it can be used as such, but it exists mainly for debugging purposes (if you check its reference, it's under the "Debugging functions" category), and it's certainly slower than "real" shared memory: it copies the memory of one process into a buffer in another, while real shared memory doesn't have that overhead.
The full list of IPC methods supported by Windows is available on MSDN. Still, if you just have two applications that want to share a memory block, you should create a named memory-mapped file (backed by the paging file) with CreateFileMapping/MapViewOfFile; that should be the most straightforward and fastest method. The details of file mapping are described on its MSDN page.
The relevant Boost IPC classes can act as a thin wrapper around shared memory; AFAIK they only encapsulate the calls to the relevant system-specific APIs, but in the end you get the usual pointer to the shared memory block, so operation should be as fast as using the native APIs.
Because of this I advise you to use Boost.Interprocess, since it's portable, C++-friendly (it provides RAII semantics) and does not give you any performance penalty after the shared memory block has been created (it can provide additional functionality on top of shared memory, but it's all opt-in; if you just want shared memory, you get just that).
I'm writing a C++ program that works as a socket between other programs in different languages (C# and Python so far). This socket reads data from the USB port, does some processing, and streams the result to the other programs.
My idea: each program asks, via a port message, to be part of the stream. As a response, the program gets a pointer to shared memory.
Is this possible? Shared memory across different programming languages? And can I just pass the pointer to the shared memory to the other program? And is there a way to do it cross-platform (Unix and Windows)?
Is this possible? Shared memory across different programming languages?
Yes. Memory is memory, so a shared-memory region created by language A can be accessed by a program written in language B.
and can I just pass the pointer to the shared memory to the other program?
It's not quite that simple; the other program won't be able to access the shared-memory region until it has mapped that same region into its own virtual-address space, using shmat() or MapViewOfFile() or similar.
Note that the shared-memory region will likely be mapped to a different range of addresses in each process that is attached to it, so don't rely on the pointer to a piece of shared data in process A being the same as a pointer to that same data in process B.
and is there a way to do it cross-platform ?
It looks like Boost has an API for shared memory, if you want to go that route. Alternatively, you could write your own shared-memory API, with different code inside the .cpp file for Windows and POSIX OS's, behind appropriate #ifdefs. That's how I did it in my library, and it works okay.
If you want to use shared memory, the answer above (by @JeremyFriesner) is a very good one.
However, as an alternative to shared memory, you can consider named pipes.
Some general information: Named pipe - Wikipedia.
The named-pipe API is more high-level than shared memory.
Instead of a raw block of memory that you need to manage yourself (including some kind of interprocess locking mechanism, like a named mutex, to access shared data), you get a transport layer for messages.
Performance-wise they are quite efficient. But of course you will have to profile it for your specific case.
The API is available for all languages that you mentioned (and many more):
For C# (on Windows): How to: Use Named Pipes for Network Interprocess Communication.
For C++: on Windows: Named Pipes - Win32, and Linux: Named pipes - Linux.
For Python: Python named pipes.
I've developed a library in C++ that allows multi-threaded usage. I want to support an option for the caller to specify a cap on the memory allocated by a given thread. (We can ignore the case of one thread allocating memory and others using it.)
Possibly making this more complicated is that my library uses various open source components (boost, ICU, etc), some of which are statically linked and others dynamically.
One option I've been looking into is overriding the allocation functions (new/delete/etc) to do the bookkeeping per thread ID. Natural concerns come up around the bookkeeping: performance, etc.
But an even bigger question is whether this approach will work with the open-source components without code changes to them.
I can't seem to find pre-existing solutions for this, though it doesn't seem like a very unusual need.
Any suggestions on this approach, or another approach?
EDIT: More background: the library can allocate anywhere from KBs to GBs per calling thread, depending on the input provided.
So the goal of this request is to support running in RAM-constrained environments more gracefully and deterministically. This is not for a hard real-time environment with strict memory limits; it's to support a number of concurrent threads, each with a "safe" allocation cap, to avoid engaging the page/swap file.
Basic example use case: a system with 32GB RAM, 20GB free, the application using my library may configure itself to use a max of 10 threads and configure the library to use a max of 1GB per thread.
Upon hitting the cap the current thread's call into the library will cease further work and return a suitable error. (The code is already fully RAII so unwinding cleanly is easy.)
BTW I found some interesting content on the web already; sadly, none of it offers much hope for a "simple & effective" solution. But this one is especially insightful.
This is related to a previous post:
Allocating a large memory block in C++
I would like a single C++ server running that generates a giant matrix M. Then, on the same machine, I would like to run other programs that can contact this server and get the memory address for M. M is read-only, and the server creates it once. I should be able to spawn a client ./test, and this program should be able to get read-only access to M. The server should always be running, but I can run other programs like ./test at any time.
I don't know much about C++ or OS internals; what is the best way to do this? Should I use POSIX threads? The matrix is a primitive type (double, float, etc.), and all programs know its dimensions. The client programs require the entire matrix, so I don't want the latency of a memory copy from the server to the client; I just want to share that pointer directly. What are my best options?
One inter-process communication mechanism you could definitely use for sharing direct access to your matrix M is shared memory. It means that the OS lets multiple processes access a shared segment of memory, as if it were in their own address space, by mapping it into each process that requests it. A solution that answers all your requirements, and is also cross-platform, is boost::interprocess. It is a thin portable layer that wraps all of the necessary OS calls. See a working example right here in the docs.
Essentially, your server process just needs to create an object of type boost::interprocess::shared_memory_object, providing the constructor with a name for the shared segment, and set the segment's size with its truncate() method. From that moment, any other process can create an object of the same type, provide the same name, and map the same segment into its own address space. Now it too has access to the exact same memory. No copies involved.
If for some reason you are unable to use the portable Boost libraries, or for some other reason want to restrict the supported platform to Linux, use the POSIX API around the mmap() function. Here's the Linux man page. Usage is basically not far from the Boost pipeline described above: you create the named segment with shm_open() and size it with ftruncate(). From there onwards you receive the mapped pointer to this allocated space by calling mmap(). In simpler cases where you'll only be sharing between parent and child processes, you can use this code example from this very website.
Of course, no matter what approach you take, when using a shared resource, make sure to synchronize reads/writes properly so as to avoid any race condition, just as you would with multiple threads of the same process.
Of course other programs cannot access the matrix as long as it is in the "normal" process memory.
Without questioning the design approach: yes, you have to use shared memory. Look up functions like shmget(), shmat(), etc. Then you don't need to pass a pointer to the other program (which would not work anyway); you simply use the same file in ftok() everywhere to get access to the same shared memory.
I wrote a C++ program which reads a file using a file pointer, and I need to run multiple processes at the same time. Since the file can be huge (100MB+), I think I need to use shared memory to reduce memory usage across the processes. (For example, an IPC library like boost::interprocess::shared_memory_object.)
But is it really necessary? I think that if multiple processes read the same file, the virtual memory of each process gets mapped to the same physical memory for that file through the page tables.
I read the Linux documentation, which says:
Shared Virtual Memory
Although virtual memory allows processes to have separate (virtual) address spaces, there are times when you need processes to share memory. For example there could be several processes in the system running the bash command shell. Rather than have several copies of bash, one in each process's virtual address space, it is better to have only one copy in physical memory and all of the processes running bash share it. Dynamic libraries are another common example of executing code shared between several processes. Shared memory can also be used as an Inter Process Communication (IPC) mechanism, with two or more processes exchanging information via memory common to all of them. Linux supports the Unix TM System V shared memory IPC.
Also, Wikipedia says:
In computer software, shared memory is either
- a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time. One process will create an area in RAM which other processes can access, or
- a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance instead, by using virtual memory mappings or with explicit support of the program in question. This is most often used for shared libraries and for XIP.
Therefore, what I'm really curious about is this: is shared virtual memory supported at the OS level or not?
Thanks in advance.
Regarding your first question: if you want your data to be accessible by multiple processes without duplication, you'll definitely need some kind of shared storage.
In C++ I'd surely use boost's shared_memory_object. That's a valid option to share (large) data among processes and it has good documentation with examples (http://www.boost.org/doc/libs/1_55_0/doc/html/interprocess/sharedmemorybetweenprocesses.html).
Using mmap() is a lower-level approach, usually used in C. To use it for IPC you'll have to make the mapped region shared. From http://man7.org/linux/man-pages/man2/mmap.2.html:
MAP_SHARED
Share this mapping. Updates to the mapping are visible to other processes that map this file, and are carried through to the underlying file. The file may not actually be updated until msync(2) or munmap() is called.
Also on that page there's an example of mapping a file to shared memory.
In either case there are at least two things to remember:
You need synchronization if there are multiple processes that modify the shared data.
You can't use pointers, only offsets from the beginning of the mapped region.
Here's an explanation from the boost docs:
If several processes map the same file/shared memory, the mapping address will be surely different in each process. Since each process might have used its address space in a different way (allocation of more or less dynamic memory, for example), there is no guarantee that the file/shared memory is going to be mapped in the same address.
If two processes map the same object in different addresses, this invalidates the use of pointers in that memory, since the pointer (which is an absolute address) would only make sense for the process that wrote it. The solution for this is to use offsets (distance) between objects instead of pointers: If two objects are placed in the same shared memory segment by one process, the address of each object will be different in another process but the distance between them (in bytes) will be the same.
Regarding the OS support: yes, shared memory is an OS-specific feature.
In Linux, mmap() is actually implemented in the kernel and in modules, and it can also be used to transfer data between user space and kernel space.
Windows also has its specifics:
Windows shared memory creation is a bit different from portable shared memory creation: the size of the segment must be specified when creating the object and can't be specified through truncate like with the shared memory object. Take in care that when the last process attached to a shared memory is destroyed the shared memory is destroyed so there is no persistency with native windows shared memory.
Your question doesn't make sense.
I think I need use shared memory. (For example IPC library like boost::interprocess::shared_memory_object).
If you use shared memory, the memory is shared.
I think if multiple processes read same file, then virtual memory of each processes mapped to same physical memory of file thru page table.
Now you're talking about memory-mapped I/O. It isn't the same thing; however, it is more probably what you actually need in this situation.
Our app depends on an external, 3rd party-supplied configuration (including custom driving/decision making functions) loadable as .so file.
Independently, it cooperates with external CGI modules using a chunk of shared memory, where almost all of its volatile state is kept, so that the external modules can read it and modify it where applicable.
The problem is that the CGI modules require a lot of the permanent config data from the .so as well, and the main app performs a whole lot of entirely unnecessary copying between the two memory areas to make the data available. The idea is to load the whole shared object into shared memory and make it directly available to the CGI. The problem is: how?
dlopen and dlsym don't provide any facilities for assigning where to load the SO file.
we tried shmat(). It seems to work only until some external CGI actually tries to access the shared memory; then the area pointed to appears just as private as if it had never been shared. Maybe we're doing something wrong?
loading the .so in each script that needs it is out of the question. The sheer size of the structure, combined with the frequency of calls (some of the scripts are called once a second to generate live updates) and this being an embedded app, makes it a no-go.
simply memcpy()'ing the .so into shm is not good either - some structures and all functions are interconnected through pointers.
The first thing to bear in mind when using shared memory is that the same physical memory may well be mapped into the two processes' virtual address spaces at different addresses. This means that if pointers are used anywhere in your data structures, they are going to cause problems. Everything must work off an index or an offset to work correctly. To use shared memory, you will have to purge all the pointers from your code.
When loading a .so file, only one copy of the .so file code is loaded (hence the term shared object).
fork may also be your friend here. Most modern operating systems implement copy-on-write semantics. This means that when you fork, your data segments are only copied into separate physical memory when one process writes to the given data segment.
I suppose the easiest option would be to use a memory-mapped file, as Neil has already proposed. If that option does not fit well, an alternative could be to define a dedicated allocator. Here is a good paper about it: Creating STL Containers in Shared Memory
There is also Ion Gaztañaga's excellent Boost.Interprocess library, with shared_memory_object and related features. Ion has proposed the solution to the C++ standardization committee for a future TR: Memory Mapped Files And Shared Memory For C++, which may indicate it's a solution worth considering.
Placing actual C++ objects in shared memory is very, very difficult, as you have found. I would strongly recommend you don't go that way - putting data that needs to be shared in shared memory or a memory mapped file is much simpler and likely to be much more robust.
You need to implement object serialization.
A serialization function will convert your object into bytes; then you can write the bytes into shared memory and have your CGI module deserialize them back into an object.