I have a process that internally makes a "std::system" call (I believe std::system spawns a child process). In that call, I execute a different application:
void generate()
{
    std::system("./childProcess.exe");
}
The generate function above is called by multiple threads.
Now, I need to share a "complex object" from the parent process to the child process, i.e., childProcess.exe.
I tried Boost.Interprocess shared memory, but in vain.
The complex object I mentioned earlier dynamically allocates memory internally many times. I believe those dynamically allocated blocks are not associated with my shared memory segment. Please correct me if otherwise.
My class is like this:
class CompClass
{
    int cnt;
    SubClass *subobj;
};

CompClass::CompClass()
{
    subobj = new SubClass;
    subobj->func();
}

void SubClass::func()
{
    YClass *y = new YClass;
}
and so on, it internally has many such memory allocations.
When I create a shared memory segment of object type CompClass in the parent process and open the shared memory segment in the child process, I am able to access the cnt variable in the child process. But I am not able to access subobj in the child process.
I believe this is because subobj is dynamically allocated: the child process has its own heap, which is not associated with the shared memory segment.
I found that even for std::string, Boost comes up with boost::interprocess::string, since std::string internally calls new.
Please suggest the best IPC mechanism to share this "Complex object" between multiple processes.
If you want to use that class member arrangement in shared memory, you'll have to replace the calls to new with calls that retrieve memory from a pool that is also in shared memory. You'll need to find or write a suitable pool allocator for that - the Standard doesn't provide one, but Boost provides something pretty close using offsets instead of raw pointers (which is more flexible, because insisting on loading shared memory at specific absolute virtual addresses isn't particularly reliable or scalable) - see boost::interprocess::offset_ptr.
Alternatives are to use threads instead of processes; to serialise the data to shared memory and deserialise it in the child (e.g. write it to a stringstream, then copy the .str() data into shared memory); or to store the subclass object directly inside the containing class by value instead of holding a pointer.
In the Boost.Interprocess documentation, under "Where is this being allocated?", it is stated that Boost.Interprocess containers are placed in shared memory using two mechanisms at the same time:
Boost.Interprocess construct<>, find_or_construct<>... functions. These functions place a C++ object in the shared memory. But this places only the object, not the memory that this object may allocate dynamically.
Shared memory allocators. These allow allocating shared memory portions so that containers can dynamically allocate fragments of memory to store newly inserted elements.
What is the use case for a Boost vector whose control structure lives in the current process but which uses a shared memory allocator, so that its elements are placed in shared memory?
If I want to share this structure with another process:
struct Shared
{
vector<string> m_names;
vector<char> m_data;
};
I guess I want the vectors to be accessible to the other process so that it can iterate on them, right ?
find_or_construct and friends are for your own direct allocations.
The allocators are to be passed to library types to do their internal allocations in a similar fashion. Otherwise, only the "control structure" (e.g. the few dozen bytes of a typical std::string object) would be in the shared memory, instead of all the related data allocated internally by the standard library container.
Well, you cannot access the vector as such from the other process, but you can access the elements (so, in your example, the strings), e.g. via a pointer.
I am using boost::interprocess::managed_shared_memory to create memory to be shared across processes.
Following are the steps taken:
Step 1:
a) Create memory.
Step 2:
a) Open memory.
b) Write to memory.
Step 3:
a) Open memory.
b) Read from memory.
c) Open memory.
d) Read from memory.
e) Open memory.
f) Read from memory.
g) ...and so on and so forth!
Now, the question is: in step 3, I am opening the memory again and again before every read! I think this is redundant behaviour.
How can I read multiple times while opening the memory only once?
The open operation is actually quite expensive in terms of performance, and it is proving to be a bottleneck in my application.
The idea should be to open the shared resource (memory, in this case) only once and reuse the same handle/variable/object to access it time and again.
Any of the following approaches will do:
If all the accesses are in a single function, use a variable with local scope to maintain the lifetime of the shared memory. Moreover, if the method has other statements that do not need the shared resource, the resource itself can be enclosed in a pair of { } to give it a narrower scope within the function.
Alternatively, the same can be stored as a pointer and passed between functions, in case the workflow involves calling several methods that use the shared resource.
Wrap the shared resource as an object (or a pointer) held as a member variable of a class. That class can either be created just to manage the shared memory, or an existing class which uses the resource can take ownership, depending upon the design.
Many of the samples have the managed_shared_memory in the main function for brevity.
You should, however, make it a member of a relevant class (with the responsibility to manage the lifetime of your shared memory mapping).
You could, of course, keep it as a local variable in main, but then you'd be forced to keep passing it around in function calls. (I do NOT recommend making it a global variable, or a singleton for that matter.)
I'm new to C++/CLI, and would like a clarification on memory free up.
Imagine a scenario, where :
sampleServer srv = new sampleServer();
while(true)
{
    ABC newObject = srv.getItem();
}
ABC^ sampleServer::getItem() { return gcnew ABC(/* list of some parameters */); }
Now, unless Dispose is called on newObject, memory will continue to be allocated but not released, because of the continuous stream of new objects being returned and never freed.
If I do not wish to call Dispose, the only other way is to have finalizers take care of the memory free-up.
However, class sampleServer does not have an object of class ABC as one of its members. It is still able to return a new instance of it because the two classes are in the same namespace.
So if I am considering class sampleServer, what am I supposed to call the Finalizer on?
And if the thought process is incorrect, how would I free up memory in the case given above?
Since you are using gcnew with ABC, it must be a ref class.
Then, your compiler should be giving you:
'*' : cannot use this indirection on type 'ABC'
Now, suppose you fix that by replacing the syntax for the C++ * type modifier with the CLI ^ handle syntax, you effectively have:
while(true)
{
ABC^ newObject = srv->getItem();
}
The scenario you are considering does not exist. When newObject goes out of scope, the object it was referencing is no longer reachable by sweeping through connected objects from the garbage collection root object(s). So, when the garbage collector runs[1], all of those objects can be freed.
With a system of garbage-collected objects, memory resources are handled by the system. You only need to be concerned about resources that objects hold outside of that system. That would include "native heap" memory obtained through malloc or new, and handles to various objects obtained through the operating system (file descriptors, sockets, windows, etc.).
In your classes, you haven't shown any so-called "native resources" or any recursive ownership of native resources.
[1] Theoretically, except for performance reasons, the garbage collector need not run until no more virtual memory can be committed (e.g., the system is out of disk space). So, don't go looking at memory usage unless you are evaluating the garbage collector.
I'm trying to use an mmap-like segment to allocate objects in STL containers; for that I'm using boost::interprocess, which provides memory mappings, allocators and anonymous memory mapping support.
A bit like this
My problem is that the anonymous_shared_memory function here returns something that looks half mapped file and half shared memory (which makes sense with mmap :) ), and although both styles work with interprocess allocators, this one seems to be missing a segment_manager, which does the actual chunk allocation.
It returns a high-level mapped_region that is already mapped in the process, but there is no manager and no way that I can see to hook in a segment_manager.
A mapped_region is a low- to mid-level object, and literally represents just the memory. Managed shared memory, however, "is an advanced class that combines a shared memory object and a mapped region that covers all the shared memory object", so it is the managed memory that possesses the segment_manager.
Given that you want to use anonymous_shared_memory, first you'd get the mapped_region as per the example; then you would use placement new to put a segment_manager at the beginning of it. Its constructor takes the size of the memory segment in which it is being constructed. I do not know whether this includes the size of the manager itself, although I suspect it does.
We have a need for multiple programs to call functions in a common library. The library functions access and update a common global memory. Each program’s function calls need to see this common global memory. That is one function call needs to see the updates of any prior function call even if called from another program.
For compatibility reasons we have several design constraints on how the functions exposed by the shared library must operate:
Any data items (both standard data types and objects) that are declared globally must be visible to all callers regardless of the thread in which the code is running.
Any data items that are declared locally in a function are only visible inside that function.
Any standard data type or an instance of any class may appear either locally or globally or both.
One solution is to put the library's common global memory in named shared memory. The first library call would create the named shared memory and initialize it. Subsequent program calls would get the address of the shared memory and use it as a pointer to the global data structure. Object instances declared globally would need to be dynamically allocated in shared memory, while object instances declared locally could be placed on the stack or in the local heap of the caller thread.
Problems arise because initialized objects in the global memory can create and point to sub-objects which allocate (new) additional memory. These new allocations also need to be in the shared memory and seen by all library callers. Another complication is that these objects, which contain strings, files, etc., can also be used in the calling program. When declared in the calling program, the object's memory is local to the calling program, not shared, so the object's code needs to handle either case.
It appears to us that the solution will require that we override the global placement new, regular new and delete operators. We found a design for a memory management system that looks like it will work but we haven’t found any actual implementations. If anyone knows of an implementation of Nathan Myers’ memory management design (http://www.cantrip.org/wave12.html?seenIEPage=1) I would appreciate a link to it. Alternatively if anyone knows of another shared memory manager that accommodates dynamically allocating objects I would love to know about it as well. I've checked the Boost libraries and all the other sources I can find but nothing seems to do what we need. We prefer not to have to write one ourselves. Since performance and robustness are important it would be nice to use proven code. Thanks in advance for any ideas/help.
Thanks for the suggestions about the ATL and OSSP libraries. I am checking them out now, although I'm afraid ATL is too Win-centric if our target turns out to be Unix.
One other thing now seems clear to us. Since objects can be dynamically created during execution, the memory management scheme must be able to allocate additional pages of shared memory. This is now starting to look like a full-blown heap replacement memory manager.
Take a look at boost.interprocess.
OSSP mm - Shared Memory Allocation:
man 3 mm
As I'm sure you have found, this is a very complex problem, and very difficult to correctly implement. A few tips from my experiences. First of all, you'll definitely want to synchronize access to the shared memory allocations using semaphores. Secondly, any modifications to the shared objects by multiple processes need to be protected by semaphores as well. Finally, you need to think in terms of offsets from the start of the shared memory region, rather than absolute pointer values, when defining your objects and data structures (it's generally possible for the memory to be mapped at a different address in each attached process, although you can choose a fixed mapping address if you need to). Putting it all together in a robust manner is the hard part. It's easy for shared memory based data structures to become corrupted if a process should unexpectedly die, so some cleanup / recovery mechanism is usually required.
Also study mutexes and semaphores. When two or more entities need to share memory or data, there needs to be a "traffic signal" mechanism to limit write access to only one user.