Our app depends on an external, third-party-supplied configuration (including custom driving/decision-making functions) loadable as a .so file.
Independently, it cooperates with external CGI modules through a chunk of shared memory, where almost all of its volatile state is kept so that the external modules can read it and, where applicable, modify it.
The problem is that the CGI modules also require a lot of the permanent config data from the .so, and the main app performs a great deal of entirely unnecessary copying between the two memory areas to make that data available. The idea is to load the whole shared object into shared memory and make it directly available to the CGI modules. The question is: how?
dlopen() and dlsym() don't provide any facility for specifying where to load the .so file.
We tried shmat(). It seems to work only until some external CGI actually tries to access the shared memory; then the area pointed to appears just as private as if it had never been shared. Maybe we're doing something wrong?
Loading the .so in each script that needs it is out of the question. The sheer size of the structure, combined with the frequency of calls (some of the scripts are invoked once a second to generate live updates) and the fact that this is an embedded app, makes it a no-go.
Simply memcpy()'ing the .so into shm is no good either: some structures, and all of the functions, are interconnected through pointers.
The first thing to bear in mind when using shared memory is that the same physical memory may well be mapped into the two processes' virtual address spaces at different addresses. This means that any pointers used in your data structures are going to cause problems; everything must work off an index or an offset instead. To use shared memory, you will have to purge all the pointers from your code.
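For example (a minimal sketch with illustrative names), a linked structure kept in shared memory can refer to its nodes by array index instead of by pointer:

    #include <cstddef>

    // All nodes live in one contiguous array inside the shared segment, so a
    // node names its neighbour by index; an index stays valid no matter where
    // the segment is mapped in each process.
    struct Node {
        int value;
        std::ptrdiff_t next; // index of the next node in the array, -1 for none
    };

    Node* resolve(Node* base, std::ptrdiff_t idx) {
        return idx < 0 ? nullptr : base + idx;
    }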
When a .so file is loaded, only one copy of its code is held in physical memory, no matter how many processes use it (hence the term shared object).
fork may also be your friend here. Most modern operating systems implement copy-on-write semantics: when you fork, your data segments are only copied into separate physical memory once one of the processes writes to them.
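As a minimal sketch (sizes are illustrative and the memset stands in for real loading), a parent can set up a large read-mostly block once and let forked children read it without any physical copy:

    #include <cstring>
    #include <sys/wait.h>
    #include <unistd.h>

    static char big_config[64 * 1024 * 1024]; // large, read-mostly data

    int main() {
        std::memset(big_config, 0x2a, sizeof big_config); // stands in for loading
        for (int i = 0; i < 4; ++i) {
            if (fork() == 0) {
                // Child: reads the parent's physical pages via copy-on-write;
                // nothing is duplicated until a process writes to a page.
                volatile char c = big_config[i];
                (void)c;
                _exit(0);
            }
        }
        while (wait(nullptr) > 0) {}
        return 0;
    }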
I suppose the easiest option would be to use a memory-mapped file, which Neil has already proposed. If that option does not fit well, an alternative could be to define a dedicated allocator. Here is a good paper about it: Creating STL Containers in Shared Memory
There is also Ion Gaztañaga's excellent Boost.Interprocess library with shared_memory_object and related features. Ion has proposed the solution to the C++ standardization committee for a future TR: Memory Mapped Files And Shared Memory For C++
which may indicate it is a solution worth considering.
Placing actual C++ objects in shared memory is very, very difficult, as you have found. I would strongly recommend you don't go that way - putting data that needs to be shared in shared memory or a memory mapped file is much simpler and likely to be much more robust.
You need to implement object serialization.
The serialization function will convert your object into bytes; you can then write the bytes into shared memory and have your CGI module deserialize them back into an object.
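For plain-old-data this can be as simple as copying the object's byte image; anything containing pointers needs a real encode/decode step. A minimal sketch, with a hypothetical POD Config struct:

    #include <cstdint>
    #include <cstring>

    // Hypothetical POD configuration: no pointers, so its byte image means
    // the same thing in every process.
    struct Config {
        std::uint32_t version;
        double limits[16];
    };

    // Serialize: write the object's bytes into the shared buffer.
    void serialize(const Config& cfg, unsigned char* shm) {
        std::memcpy(shm, &cfg, sizeof cfg);
    }

    // Deserialize: rebuild the object from the shared buffer.
    Config deserialize(const unsigned char* shm) {
        Config cfg;
        std::memcpy(&cfg, shm, sizeof cfg);
        return cfg;
    }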
I hope an expert on Boost managed shared memory can help me. I'm trying to write the memory to a file, and I can't figure it out from the Boost examples. Can anyone provide me with some examples?
Thanks.
If you truly need this, I see roughly two approaches at a glance:
1. Copy it: either use serialization/deserialization, or just copy by constructing clones in a different segment manager (obviously linked to a memory-mapped file this time).
2. Use a managed external buffer. A managed buffer is basically your segment manager on top of some transparent memory buffer; you decide whether that buffer lives in the local process address space, in shared memory, or indeed in a memory-mapped file.
This is the supported method for using the same segment manager and segment data layout in both, as sketched below.
If you're really desperate you could try to bitwise-copy the complete shared memory object into a file of equal size and simply open it. This might work if and only if the managed_mapped_file implementation has exactly the same or a compatible segment-management structure, headers, and layout. That's a long shot though, and even if it appears to work, it's at best undocumented and therefore likely invokes undefined behaviour.
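Coming back to option 2, a minimal sketch of a managed external buffer (the heap buffer here stands in for a mapped_region over shared memory or a file; the name "answer" is illustrative):

    #include <boost/interprocess/managed_external_buffer.hpp>
    #include <cstdlib>

    namespace bip = boost::interprocess;

    int main() {
        // Any suitably aligned raw buffer will do; here it is plain heap memory.
        const std::size_t size = 65536;
        void* buffer = std::aligned_alloc(16, size);
        {
            // Put a segment manager on top of the transparent buffer...
            bip::managed_external_buffer segment(bip::create_only, buffer, size);

            // ...and construct named objects in it, as with managed_shared_memory.
            segment.construct<int>("answer")(42);
        } // the segment manager is torn down before its buffer is released
        std::free(buffer);
        return 0;
    }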
Perhaps you are looking for mapped_file: http://www.boost.org/doc/libs/1_63_0/libs/iostreams/doc/classes/mapped_file.html
It is a memory-mapping API for files, and you can open the same file in multiple processes.
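A minimal sketch ("data.bin" is illustrative; any existing file works, and several processes can map it simultaneously):

    #include <boost/iostreams/device/mapped_file.hpp>
    #include <iostream>

    int main() {
        // Read-only mapping of an existing file; use mapped_file_sink or
        // mapped_file for writable mappings.
        boost::iostreams::mapped_file_source file("data.bin");
        std::cout << "mapped " << file.size() << " bytes\n";
        return 0;
    }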
I'm using POSIX shared memory on a Linux machine to communicate between multiple MPI processes. I have a working solution, but I want to know how to make efficient use of the shared memory space for large data.
I have a machine with a 64 GB shared memory limit, and it may happen that I have to write more than 64 GB of data into this space overall, although in smaller chunks of 1-2 GB.
What I want to know is this:
How can I really delete the memory occupied by a 1-2 GB chunk as soon as its purpose is served and I don't want the data anymore?
I'm using shm_unlink(), but it doesn't seem to clear space in /dev/shm/.
Please help!
From the shm_unlink() description at http://pubs.opengroup.org/onlinepubs/009695399/functions/shm_unlink.html
it seems that the removal of the memory may actually be postponed; read below:
If one or more references to the shared memory object exist when the object is unlinked, the name shall be removed before shm_unlink() returns, but the removal of the memory object contents shall be postponed until all open and map references to the shared memory object have been removed.
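In other words, the space under /dev/shm is reclaimed only once every process has both unmapped the region and closed its descriptor. A minimal sketch of the full lifecycle (the name "/my_chunk" is illustrative; error checks omitted):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        const char* name = "/my_chunk";
        const size_t size = 1UL << 30; // one 1 GB chunk

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, size);
        void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        // ... use the chunk ...

        // All three steps must happen (in every process holding a reference)
        // before the kernel actually frees the pages:
        munmap(p, size);   // drop the mapping
        close(fd);         // drop the descriptor
        shm_unlink(name);  // drop the name; contents go once no refs remain
        return 0;
    }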
I hope this helps.
I wrote a C++ program which reads a file using a file pointer, and I need to run multiple processes at the same time. Since the file can be huge (100 MB or more), I think I need to use shared memory to reduce the memory usage across the processes (for example, via an IPC library like boost::interprocess::shared_memory_object).
But is that really needed? I think that if multiple processes read the same file, the virtual memory of each process is mapped to the same physical memory for the file through the page tables.
I read a Linux doc, which says:
Shared Virtual Memory
Although virtual memory allows processes to have separate (virtual) address spaces, there are times when you need processes to share memory. For example, there could be several processes in the system running the bash command shell. Rather than have several copies of bash, one in each process's virtual address space, it is better to have only one copy in physical memory and all of the processes running bash share it. Dynamic libraries are another common example of executing code shared between several processes. Shared memory can also be used as an Inter Process Communication (IPC) mechanism, with two or more processes exchanging information via memory common to all of them. Linux supports the Unix System V shared memory IPC.
Also, Wikipedia says:
In computer software, shared memory is either
a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time (one process will create an area in RAM which other processes can access), or
a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance instead, by using virtual memory mappings or with explicit support of the program in question. This is most often used for shared libraries and for XIP.
Therefore, what I'm really curious about is whether shared virtual memory is supported at the OS level or not.
Thanks in advance.
Regarding your first question - if you want your data to be accessible by multiple processes without duplication, you'll definitely need some kind of shared storage.
In C++ I'd surely use Boost's shared_memory_object. It's a valid option for sharing (large) data among processes, and it has good documentation with examples (http://www.boost.org/doc/libs/1_55_0/doc/html/interprocess/sharedmemorybetweenprocesses.html).
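A minimal sketch of that option (the name "FileCache" is illustrative; a second process would open the same object with open_only):

    #include <boost/interprocess/shared_memory_object.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <cstring>

    namespace bip = boost::interprocess;

    int main() {
        // Create (or open) a named shared memory object and size it.
        bip::shared_memory_object shm(bip::open_or_create, "FileCache",
                                      bip::read_write);
        shm.truncate(1024);

        // Map it into this process and write through the mapping.
        bip::mapped_region region(shm, bip::read_write);
        std::memset(region.get_address(), 0, region.get_size());
        return 0;
    }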
Using mmap() is a lower-level approach usually used in C. To use it for IPC, you'll have to make the mapped region shared. From http://man7.org/linux/man-pages/man2/mmap.2.html:
MAP_SHARED
Share this mapping. Updates to the mapping are visible to other processes that map this file, and are carried through to the underlying file. The file may not actually be updated until msync(2) or munmap() is called.
Also on that page there's an example of mapping a file to shared memory.
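Along the same lines, a minimal sketch ("shared.dat" is illustrative and must already exist and be at least a page long; error checks omitted):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        int fd = open("shared.dat", O_RDWR);
        void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        static_cast<char*>(p)[0] = 'x'; // visible to every process mapping the file
        msync(p, 4096, MS_SYNC);        // force the update back to the file

        munmap(p, 4096);
        close(fd);
        return 0;
    }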
In either case there are at least two things to remember:
You need synchronization if there are multiple processes that modify the shared data.
You can't use pointers, only offsets from the beginning of the mapped region.
Here's an explanation from the boost docs:
If several processes map the same file/shared memory, the mapping address will be surely different in each process. Since each process might have used its address space in a different way (allocation of more or less dynamic memory, for example), there is no guarantee that the file/shared memory is going to be mapped in the same address.
If two processes map the same object in different addresses, this invalidates the use of pointers in that memory, since the pointer (which is an absolute address) would only make sense for the process that wrote it. The solution for this is to use offsets (distance) between objects instead of pointers: If two objects are placed in the same shared memory segment by one process, the address of each object will be different in another process but the distance between them (in bytes) will be the same.
Regarding the OS support - yes, shared memory is an OS-specific feature.
In Linux, mmap() is actually implemented in the kernel and in modules, and can be used to transfer data between user space and kernel space.
Windows also has its specifics:
Windows shared memory creation is a bit different from portable shared memory creation: the size of the segment must be specified when creating the object and can't be specified through truncate like with the shared memory object. Take into account that when the last process attached to a shared memory is destroyed, the shared memory is destroyed as well, so there is no persistency with native Windows shared memory.
Your question doesn't make sense.
I think I need to use shared memory. (For example, an IPC library like boost::interprocess::shared_memory_object.)
If you use shared memory, the memory is shared.
I think if multiple processes read the same file, then the virtual memory of each process is mapped to the same physical memory of the file through the page table.
Now you're talking about memory-mapped I/O. It isn't the same thing; however, it is more probably what you need in this situation.
I am linking against 10 static libraries.
My binary's file size is reduced when I use dynamic libraries instead.
As far as I know, using dynamic libraries will not reduce memory usage.
But my senior told me that using shared libraries will also reduce memory usage (when multiple processes are running from the same executable code). Is that statement right?
He told me that since there will be no duplicate copies of the functions in the library, memory usage will be less when you create n instances of the process.
When the process starts, it forks its 10 children. So will using dynamic libraries in place of static libraries reduce total memory usage?
In your example, dynamic libraries won't save you much. When you fork your process on a modern OS, all the pages are marked copy-on-write rather than actually copied, so your static library's code is already shared between the 10 copies of your process.
However, where you can save is when a dynamic library is shared between different processes rather than between forks of the same process. If you're using the same glibc.so as another process, the two processes share the physical pages of glibc.so, even though they are otherwise unrelated.
If you fork a given process there shouldn't be much of a difference, because most operating systems use copy-on-write. This means that pages will only be copied if they're updated, so things like the code segments of shared libraries shouldn't be affected.
On the other hand, different processes won't be able to share code if they're statically linked. Consider libc, which practically every binary links against: if they were all statically linked, you'd end up with dozens of copies of printf in memory.
The bottom line is you shouldn't link your binaries statically unless you have an excellent reason for it.
Your senior in this instance is correct. A single copy of the shared library will be loaded into memory and will be used by every program that references it.
There is a post regarding this topic here:
http://www.linuxquestions.org/linux/articles/Technical/Understanding_memory_usage_on_Linux
I've long had a desire for an STL-ish container that I could place into a shared memory segment or a memory-mapped file.
I've considered the use of a custom allocator and placement new to place a regular STL container into a shared memory segment (like this DDJ article). The problem is that STL containers will internally have pointers to the memory they own. Therefore, if the shared memory segment or memory-mapped file loads at a different base address (perhaps on a subsequent run, or in a second process), then the internal pointers are suddenly invalid. As far as I can figure out, the custom allocator approach only works if you can always map the memory segment into your process at the same address. At least with memory-mapped files, I have lots of experience of that NOT being the case if you just let the system map it wherever it feels like.
I've had some thoughts on how to do this, but I'd like to avoid it if someone else has already done the work (that's me, being lazy).
I'm currently leaving locking out of the discussion, as the best locking strategy is highly application dependent.
The best starting point for this is probably the boost Interprocess libraries. They have a good example of a map in shared memory here:
interprocess map
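For reference, a minimal sketch in the spirit of that example (the names "SharedSegment" and "MyMap" are illustrative):

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/allocators/allocator.hpp>
    #include <boost/interprocess/containers/map.hpp>
    #include <functional>
    #include <utility>

    namespace bip = boost::interprocess;

    int main() {
        bip::managed_shared_memory segment(bip::open_or_create, "SharedSegment",
                                           65536);

        // The allocator hands out memory from the shared segment, so the map's
        // nodes live inside it and are visible to other processes.
        using Value = std::pair<const int, float>;
        using Alloc = bip::allocator<Value,
                                     bip::managed_shared_memory::segment_manager>;
        using Map   = bip::map<int, float, std::less<int>, Alloc>;

        Map* m = segment.find_or_construct<Map>("MyMap")(
            std::less<int>(), Alloc(segment.get_segment_manager()));
        (*m)[1] = 3.14f;
        return 0;
    }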
You will probably also want to read the section on offset smart pointers, which solves the internal pointer problem you were referring to.
Offset Pointer
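A tiny sketch of the idea: offset_ptr stores the distance to the pointee rather than an absolute address, so a structure built from them survives being mapped at different addresses in different processes:

    #include <boost/interprocess/offset_ptr.hpp>

    namespace bip = boost::interprocess;

    struct Node {
        int value;
        bip::offset_ptr<Node> next; // relative offset, not an absolute address
    };

    int main() {
        Node a{1, nullptr}, b{2, nullptr};
        a.next = &b;           // stored as the byte distance from a.next to b
        return a.next->value;  // dereferences like a raw pointer
    }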
You may also want to checkout the Intel Threading Building Blocks (TBB) Containers.
I always had good experiences (years ago) with ACE. It's a networking/communication framework, but it has a section on shared memory.
I only know of proprietary versions. Bloomberg and EA have both published about their STL versions, but haven't (to my knowledge) released the fruits of their labor.
Try using Qt's QSharedMemory implementation.